10.5446/57052 (DOI)
Hello, FOSDEM. My name is Eric Harris-Braun. I am the co-founder of Holochain and Holo. And I'm assuming that you, the audience, are open source software developers, which gives me some footing on how to share what I want to share today, which is a demonstration of a tool that's built on Holochain that explores the space of what I think of as the hard problem, that hard problem being collaboration. So my solution to hard problems generally looks like this: find the grammar that represents in some way a hard problem space, and let distributed exploration of expressions in that space using the grammar answer the questions that are hard. So that's the punch line. That's where we're going to get to: finding the grammar that represents this hard problem space and creating expressions in that grammar to explore that space, the space being collaboration. Open source software development is full of grammars that are examples of this level up, of this solution to the hard problem of collaboration. Things like release early, release often, lots of eyes on the problem, forking, pull requests, blah, blah, blah. We know the power in open source of the level up in collaboration. But we also know that open source software development struggles with another aspect, one of the hard parts of the problem of collaboration, and that is social coherence. Open source software's openness creates divergence, which is a power, but convergence and coherence can be difficult. And in fact, this is where my core passion lies, and it is at the core and the root of Holochain as a tool. Holochain is a platform for building distributed social coherence by making it easy to write social games where all players can validate that other players are playing by the rules, or more simply, it's a grammar for social pattern creation. Well, that might sound good in general, but what about in specific? What I'm here to demonstrate is a specific use case of building social coherence, and that is a program I call Where. Where uses the grammatics of Holochain to provide a social game that answers the question "where?". This is a key question for building social coherence in teams. And how do I do that in Where? Well, I do it by the same pattern, which is creating a higher level grammar for expressing the spaces, and marking locations in those spaces, that a team might want to increase its shared held context about. So let's jump into the demo. What I've got here are three agent instances running on the Holochain runtime and being rendered and displayed through a web browser on localhost. So remember that what Where is helping us do is helping a team be able to share context. The simplest context, the simplest map of a space, is physical location. So imagine a simple check-in at the beginning of a team meeting where you want to know where everybody comes from, where their place of birth is in the world. So I, given that I was born in South America, would place my avatar near where I was born. Another person, let's say Herbert here, may have been born in the UK. So maybe they put themselves in the UK and they can already see where I am. And then the final person, Jane, who knows? Where is Jane from? Perhaps Melbourne. So we'll put Jane there. 
This becomes instantly visible on everybody else's instance of the application. Physical location is the simplest and perhaps the least interesting one, although often quite valuable when you're trying to sort out things like time zones. You could have a time zone map here instead to figure those things out. But for example, let's think about a different space. Let's say what you wanted to do in your team is get the shared context of where you are emotionally as a team, and especially do that kind of check-in on a regular basis. Well, I'm opening up the spaces tab in Where, where we can see other spaces. I have preloaded an emotions map where I might say, well, right now my emotional state at this moment is quite curious to see how this talk is going to end up landing. So I might put myself here. Meanwhile, Herbert might say, eh, on the team, I'm actually a little grumpy. I didn't really want to come here. So I'm a little let down maybe. And Jane, on the other hand, let's move to Jane's tab and open up her window. She might be kind of surprised that, wow, this is really quite an interesting thing, I didn't know that it could happen. So I'm going to put myself as Jane here, somewhere between amazed and excited. There are all kinds of other conceptual spaces that teams can use to know and share context and come into more coherence. Here's an interesting one. It's the iron triangle of cost, quality and time. Oftentimes teams fight against each other because they don't know that they're actually fighting for something that's valuable to the team. And if you had a shared understanding of that, it might be easier to listen to why somebody is arguing for some particular quality, some particular aspect of a project. So for example, I tend to be interested in getting things done quickly, and quality, well, you know, I think you can always iterate. So I would put myself over here. And did you notice we have a different window popping up? Because you can do more than just put your avatar in a particular location. This is where we're getting into the grammatics of location inside of space. What this particular marker allows you to do is type in some tag, and let's say that what we're tagging is the reason why. So the why for my location would be something like "you can always iterate". Oh, there are probably two T's in iterate. All right. Whereas Herbert, let's go to the iron triangle for Herbert, Herbert might say, I stand for getting things done cheaply, because our budget is low, a limited budget. And Jane, she might be interested in quality, because why would you do it if it's not great? So here we're beginning to see some of the grammatics that's possible inside this tool. So what is the grammar of Where, of being able to answer that question and share context well? We've already seen a couple of things. We've seen spaces and locations in those spaces. So let's take a look at how we can use and create those elements grammatically in Where. The first thing we want to be able to do, of course, is create new spaces. So obviously we want to be able to render different images, different surfaces in the 2D space where we get to click and place locations. So let's just go and do a Google search for a matrix of good and evil. We're going to have a little fun here. And let's take a look at some more of these images and pick one that seems like it might be amusing. Like how about this one right here? 
Let's copy this image link, right? And then we're going to pop back into Where and we're going to create a new space. I'm going to click on new space. Now this one is the D&D, the Dungeons and Dragons, alignment matrix. And the URL, well, let's take a preview of it. This looks like it'll be pretty good. So this is an easy way. Bam! You've created a new surface, a new 2D map to be able to place things in. But let's take a little look at the kinds of markers that we can put on there. This is also part of the grammatics of Where. Well, we've already seen you can put an avatar. And we already saw that you can put tagging. You can enable marker tagging, some text that you might want to associate with the marker at your location. And you can do other interesting things: you can either display that on the surface or not, have it show when hovered over. And you can create predefined tags, which is another grammatical element that one can use to generate interesting uses of this tool. But for now, let's just stick with this one just so that we can see how well it works. We're going to use the avatar again. And we're going to allow for tags. So I'm going to create that one. And here we go. What am I? Am I lawful good? Neutral good? I have no idea. I just pretend that I am a true neutral, like an ant. And so I would put myself up here. And it's interesting to ask what would be the question that you would be associating with this that you might want to tag. I think maybe: when? When are you neutral good? Well, I'm neutral good at work, but not at other times. So I might put that in there. So my avatar is on there and I hover over it and I can see something. And Herbert will need to refresh and reload to pull this information out of the DHT. If you're Holochain savvy, you can see how that works. And I can click on it and, well, you know, Herbert actually tends more towards chaotic evil, because it's fun. Another example of the grammatics that are afforded by the Where application is, as in open source software development, forking. So for example, let's take a look at Earth. We had here place of birth, but somebody might say, you know what, I like this, but I want to add to it something else, more than just where you are, but some aspect of yourself. So I'm going to go in, I'm going to fork this space, and it brings up a copy of the space with all the settings as you had them before. And we're going to change it. I'm going to call this birthplace plus, no, not location, plus zodiac. And you'll see how this is an interesting use also of the grammatics of the avatar. One of the kinds of markers that you can pick is an emoji subset. And what I have here, a pre-designed one, is a subset of the emojis that exist, which are the zodiac signs. So in this particular version of the game, when you were playing the game of shared alignment, instead of just putting in your place of birth, you might choose your zodiac sign. So I'm going to choose the one that looks like Holo because I think that's fine. And so it's not my face that goes down, but in fact that emoji that goes down. Another bit of grammatics that we've added into Where is what we call iterations. So in this case, I showed an emotion map that you would play one time, but we're going to fork this space. And what we're going to do is we're going to click over in the iterations tab. 
You can create iterations, which are basically tabs that collect up the markers, the locations, over time. And you can do this a couple of different ways. You can have predefined ones. So for example, you might have three different scenarios for which you might want to see what one's emotional reaction would be. An example might be success, failure, middle of the road. And in this case, when you create this map, you've got three different scenarios in which you would locate your emotions. The other thing that you can do with that is you can have it create a new tab every day. So you might want to have something like that for a standup. But the key point that I'm trying to share here is how creating grammatical affordances allows the possibility to explore a question, a concept, that creates social coherence. And my hope is that you can see how that's beginning to go here. Because all of you are coders, or so is my supposition, developers here at FOSDEM, there's one other level of grammatics that I think is kind of fun. And it's a little techy. The user experience isn't so great for it yet. But that is templates. So templates allow you to create actual SVG or HTML files that you can then fill out the values of. So for example, that's where this project triangle, the iron triangle, comes from. So I'll give you an example here. So if I create a space, I can flip the template to the iron triangle template. And what you see inside here is one, two, three different parameters. So this might be some other set of qualities that you're interested in having a sense of shared context around, rather than cost, quality and time. I don't know, something like good, true and beautiful. And so we would put them in here, the good, the true and the beautiful: good, true, beautiful. And you can see that that will create a new triangle using those values. How is that done? Well, I'm going to fork this template just to give you a view of how that's done. This is just an SVG file in which some spot has a percent-percent placeholder with the template field name that should be rendered. You saw when we created the template, there was param one, param two, param three. You could template whatever you want in here. And you can also do this kind of templating with HTML, with a canvas and with SVG, for doing very complex templating for interesting surfaces that you might want to fill out, or make it easy for other people to fill out variations on. As you can see, we filled it out here in the triangle. And that's a different kind and a different level of grammatics for creating types of spaces that we might want to know our location in. As another example, we created one that is a quadrant box, right? So here, this is a very standard quadrant people like to do, with an up-down axis and a left-right axis, and you can fill that out. And this particular template is, I think this one is done also using SVG. Okay. So how can you try this yourself with Holochain? Well, it's super easy. All you have to do is two different things. You can go to GitHub, Holochain Launcher Releases, and this brings you to a place where you can download the Holochain runtime. So let's click the latest one. You would simply scroll down under Assets and download the launcher for your particular environment that you want to use. And let me show you what that looks like when you've downloaded it and you've run it. 
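As a rough sketch of that templating idea (this is not the actual Where implementation; the %%param%% placeholder syntax follows the description above, and the SVG layout and substitution code below are illustrative assumptions), the mechanism boils down to plain text substitution over an SVG file:

```python
# A tiny stand-in for a "Where" surface template: an SVG triangle whose
# corner labels are left as %%param%% placeholders to be filled in later.
TRIANGLE_TEMPLATE = """
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 180">
  <polygon points="100,10 10,170 190,170" fill="none" stroke="black"/>
  <text x="100" y="8" text-anchor="middle">%%param1%%</text>
  <text x="10" y="178">%%param2%%</text>
  <text x="190" y="178" text-anchor="end">%%param3%%</text>
</svg>
"""

def render_template(template: str, values: dict) -> str:
    """Replace every %%name%% placeholder with its value."""
    for name, value in values.items():
        template = template.replace(f"%%{name}%%", value)
    return template

if __name__ == "__main__":
    # "good, true, beautiful" instead of "cost, quality, time"
    svg = render_template(
        TRIANGLE_TEMPLATE,
        {"param1": "Good", "param2": "True", "param3": "Beautiful"},
    )
    print(svg)
```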
The other thing you want to do is go to Where, which is github.com, Lightning Rod Labs, Where, Releases, and download the very latest version of Where. Again, under the Assets, you will download this file called where.webhapp. Again, I will show you how you run this inside the launcher and what that looks like. Once you've launched the launcher, what you should see is a window like this. And if you have downloaded the webhapp file, you click on the Install New App button. It should bring up the page in your Downloads folder where you will have downloaded it to. You can just double click on it. Here you'll see an install app ID. The fun thing to do, and the important thing to do, is type in a unique name that you share with the people who you're going to be working with in your team, so that you create a unique network. Otherwise, you'll be talking with everybody in the world. So here we might do something like "fosdem". Install the app. And there you have your app up and running. You can open it by just clicking on Open and it will launch the app in a browser, served from localhost, and you might type in your avatar. Here you can see the avatars I was using for the test. And I will use Herbert's avatar and type in the name again, Herbert. And here I have the app opening up with a bunch of the default templates that we have in the system, many more than the ones that you saw in the example before, that you can play with. So that's how you can get up and running. If you do this with a group of people, you will have connected in a peer-to-peer way, playing this social game. So the final thing I want to share is some questions about what is different about playing this kind of social game, i.e. this web app, in an environment where there is no server, where deployment was just what you saw. I think there's something important about the practicality of it, the reduced barrier to entry: anybody can install the runtime and then simply download the web app file for the DNA and the user interface and run it. And the full game, the socially connected game, happens in a peer-to-peer way. That I think is pretty interesting, trivial deployment. And the second thing is the safety that occurs inside that space because of the fact that you know that there is no third party, other than the people who are playing the game, where the data is going. It's only going among the people who are playing that game, not to some central place where the data can be pulled off and used for other purposes. So I think that's pretty interesting and may have some pretty far-reaching consequences. I hope you've enjoyed this collaboration teaser of Where. It's built as an experiment in the Grammatics of Collaboration. And I look forward to hearing from you and learning what kind of collaboration tools and games you want to create. Thank you.
A playground for group self-awareness (awhereness?) on Holochain. Groups, especially remote collaborative groups, often lack contextual information about collaborators, which makes working together harder. Co-locating oneself across a number of spaces in the context of a group (or groups) provides an important avenue for improving both sense-making and working together. Where provides a generalized grammar for creating shared maps for groups to see the emergent "whereness" of each other, as well as the grammatics to self-evolve these spaces and how to represent "location" in them.
10.5446/56986 (DOI)
Good morning. My name is Dion and, like most of us here today, I've been involved with a variety of free software. I trained traditionally as an architect in Australia and I work in the building industry. So when I say architect, I'm referring to the brick and mortar kind, not the software architect kind. So today I'll be talking about free software building information modeling, or BIM, and how it relates to architecture, engineering and construction. So you guys have probably heard of architectural scale projects that have used free software to design and build them. And if you haven't, I'd like to highlight two which are super awesome and you should totally check them out after this talk. And the first you should check out is the WikiHouse project, where you can build your own almost flat-pack house for roughly $2,000 per square meter. And here's one example of a WikiHouse-based project done in FreeCAD, called the WikiLab project. The second one is the Open Source Ecology project. And this guy made an open hardware design to create a brick press. And from that he created a whole suite of blueprints to manufacture and build all the machines needed for society to function, like a bulldozer or cement mixers. And right now he's working on the microhouse project, which is a modular housing design where you can build housing for $200 per square meter. And that's a factor of ten cheaper than the WikiHouse. But unfortunately today the guys behind those awesome projects are not going to be presenting and you're stuck with me. So I'm going to talk about free software in medium to large scale architectural projects instead. So on a domestic or small scale project, usually it's easier to use some free software because the team is smaller and you can revert to traditional 2D CAD or use hand calculations for the engineering side. But when you start scaling up, and we're talking, you know, hospitals and laboratories and shopping centers and mixed use urban developments, this is a little bit of a different story. At this scale we're dealing with many thousands of drawings produced by multiple companies who are leaving and entering contracts. And this whole process usually goes across multiple years. So when we talk about large building projects, pretty much everybody relies on proprietary software. And most people haven't even heard of free software. The software vendor market is dominated by an Autodesk monopoly and the digital data that's created is all stored in proprietary data formats. The industry knows this. The users know this and they don't like it. In fact, just last year there were over 100 UK architecture firms which signed an open letter, addressed to Autodesk, about the dismal quality of proprietary software. But these firms can write a letter and have nowhere to go. There's no alternative to switch to. And this represents a huge opportunity for expanding the scope of free software. But before I go into some of the super cool free software that's recently been developed, I'd like to paint a picture of just how fragmented and diverse the industry is. And by the end of this talk, I'd like to communicate that to develop free software enough to deliver large scale architecture, there are three things which need to happen. The first is that we need so much more than just CAD. We need a huge variety of tools to cover all the different tasks that need to happen when a large building comes together. 
And I'll talk a little bit about the different disciplines involved in making a large building to help illustrate this. The second is that we need to collaborate a lot more and integrate free software together. So from the first point, yeah, we need a lot of different software, but we need the software to be interoperable. Otherwise, we can't achieve the workflows needed to manage our built environment. And for this, I'll talk a bit about open data standards and what free software is available for doing that. And finally, we need a bit more community building because most of the industry doesn't know that free software is an option here. And so for the free software that's already mature, we just need to let people know about it more and share what we know. And then when we start working on the stuff that's not yet mature, we need a really big room for people to talk to each other. The guy writing code needs to sit next to the guy laying bricks, and he needs to sit next to the guy running an energy simulation. And this all needs to be fed back to a guy who's writing tutorials on how to do all this stuff. So we really need a vibrant community that's not specific to a single software or not specific to a single discipline, but across multiple software and across disciplines. So let's start with what disciplines are involved in a project. And one of those is this guy, of course, and he needs software which does this stuff, and that's all right, because there's free software which does this kind of stuff. But architects don't just do this. They actually use a lot of artistic tools that overlap with the CG, VFX or gaming industry. So software that's not at all to do with CAD, like Krita or the GIMP or even gaming engines like Godot, are actually really important to their workflow. So hopefully you can see that just the architect already needs a wide variety of software. That's much more than just CAD. And on a large project, there are even more requirements. And this is just one discipline. So we need a huge amount of tools to support just an architect's workflow. And the tools we might think that free software already has covered might actually not be practical on a large project. To give an example, CAD tools need to generate 2D drawings from 3D models. And this is a feature which exists in free software like FreeCAD, but it might not be able to scale to a large project. Because on large projects, you need to combine five or six models together. And each one of those models is produced in a different software, and each one contains tens of thousands of objects that we're managing gigabytes of geometry, and maybe it's not there yet. And then there's stuff like asset registers. For a building owner, an asset register is much more valuable than a fancy 3D CAD model. So we need spreadsheet or document management or CRUD type applications, not just CAD. Or for another example, something like PDF markups. It seems simple, but it's actually really fundamental to our workflows. And there's no free software that I'm aware of which implements the full measurement annotations in the PDF spec. And then of course there's things like 3D PDFs, which is a problem which probably won't go away, but hopefully we can try and ignore it as long as possible. But that's just architects, and architects don't work alone. They work with all of these other guys, all these other designers, and well, because they're all doing kind of design stuff, there's some overlap in the tools that they need. 
But obviously each one has their own quirks and showstoppers that determine whether or not the software is suitable for them. But there are also other groups of disciplines. And when you cross to another group, the tools overlap a lot less. Here we need a lot less CAD stuff and we need more GIS and surveying and site feasibility design stuff. And just like architects have specific requirements when you apply it to a large project, it's exactly the same when you look at these guys. So even though we might have general free software, let's say for laser scanning or point cloud reconstruction, we could still be missing key features which are a showstopper. Like the ability to segment the point cloud into building objects like walls and columns and compare them object by object to a 3D model. And of course, buildings don't happen without engineers. And most of these engineers, I'm not going to pretend that I know what they need, but I will just highlight the sustainability consultants. Because these guys need simulation engines for energy, lighting, and CFD. The good news is that we've got really amazing free software which does that like Radiance, Energy Plus, and Open Foam. But the bad news is that you need a PhD in reading manuals to use them and we could do a huge amount better on the UX side of things. So although these functions are the gold standard, it's really difficult for users to use. And then there's also stuff that's really important for the built environment on a larger scale that's got nothing to do with CAD. So things like climate change projection analysis, or open data standards on how to track material lifecycle impacts, or supply chain management software to deal with modern slavery. So here's a few more groups of people who might get involved in a large project. And this is obviously not a comprehensive list, but I just wanted to highlight just how diverse the software is that we need. So here's a few final interesting examples of free software that already exists, but you might not consider you will need it when you design a building. Like explosion simulation, which is needed by security consultants when you're doing a defense building. Or visual node programming, which lets architects generate building shapes. Or real-time point cloud capture for certifying construction. So all of the software ideally also needs to collaborate and integrate and interoperate to make a building happen. And this is really important because large projects are increasingly relying on digital workflows rather than reading from traditional 2D paper printouts. But the reality is that right now everybody is stuck on proprietary software with proprietary data formats, so the tools don't work very well outside each walled garden of each vendor. And they don't share data. They don't really follow international open data standards very well. And so the majority of the industry still works in isolation. And this brings me to my second point about the improved integration of free software in our built environment. So when we attempt to build software across these very diverse disciplines who need to collaborate, international open data standards play a huge role. And these standards revolve around a concept called building information modeling, or BIM. And the way this BIM concept works is that instead of just having geometry and CAD layers, you now have a semantic database of objects like walls and doors and windows. And these objects may have geometry associated with them, or they may not. 
But most importantly, they hold relationships, like what room they're part of, or the fire rating, or when they're going to be built in the construction sequence, or which organization is liable for its performance. And these relationships are special because they extend across disciplines. So when we integrate free software for the built environment, we need to integrate not just by sharing geometry, but these relationships and properties that make the geometry meaningful to each discipline. So because most of you have seen FreeCAD before, here are some screenshots of the BIM functionality inside FreeCAD. And you can see how certain relationships are exposed to the architectural discipline. And I just want to highlight that BIM data can get really quite complex. It can include things like simulations or construction timelines or linking to building sensors. Most of these BIM features you see in FreeCAD are based on a vendor-agnostic international open data standard for BIM, known as the Industry Foundation Classes, or IFC. So all the data that FreeCAD is adding to the building model can be taken out of FreeCAD and analyzed in other software. So for example, you can use Code_Aster to perform the structural analysis, or a tool called IfcCOBie to create the maintainable assets register. By ensuring that free software implementations comply with these open data standards, this really helps improve interoperability and provides a way for proprietary users to start incrementally switching to free software. Although all this BIM and IFC stuff is quite a niche topic, it's increasingly a fundamental topic that our industry relies on for the built environment. So many governments are now mandating BIM technologies on projects, and the majority of large developments rely on BIM. And this is really exciting because the free software implementations of BIM standards are actually much, much further ahead compared to proprietary solutions. And FreeCAD actually has some of the best support for BIM data standards in the industry, and to do this, FreeCAD uses a library I'd like to introduce called IfcOpenShell. IfcOpenShell started roughly 10 years ago. It's a C++ library based on OpenCascade, and it lets you read, write, and analyze this IFC-based BIM data in a variety of formats. It's got about 85 contributors, there are a few core developers, and over 600 stars on GitHub. It's also used under the hood in these tech startups, and it's starting to be introduced in university courses. To give you an idea of how IfcOpenShell works, here are some previews of the Python bindings, which you can use to create data and relationships. But data and relationships are only half of what IfcOpenShell does. The other half is about geometry creation. So geometry in the AEC industry is really, really varied. So for example, you can use solid modeling, and that's really good for modeling reinforcement bar or steel framing or basic walls. But then sometimes you'll also use meshes, which are really good for heritage reconstruction or conceptual modeling or archviz. And sometimes you want to use really specific objects because you want to quickly derive data about them, like I-beams or square sections, or things like rail alignment curves, which have really specific constraints that vary from country to country. So no geometry kernel, of course, has an I-beam as a primitive data structure, but this is how our industry thinks. 
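To give a flavour of the Python bindings mentioned above, here is a minimal sketch (not the code shown on the slides, just a hedged approximation of the IfcOpenShell Python API) that creates an IFC model in memory, adds a wall entity and queries it back:

```python
import ifcopenshell

# Start an empty IFC4 model in memory and add a wall entity to it.
model = ifcopenshell.file(schema="IFC4")
wall = model.create_entity(
    "IfcWall",
    GlobalId=ifcopenshell.guid.new(),  # every rooted IFC entity needs a GUID
    Name="Wall-01",
)

# Walls, doors, spaces, property sets, and so on all live in the same
# semantic model, so cross-discipline queries are just filters by IFC class.
print(model.by_type("IfcWall"))

# On a real project you would normally open an existing file instead:
#   model = ifcopenshell.open("project.ifc")   # hypothetical file name
```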
IfcOpenShell provides a really good set of tools to tessellate and expose these geometries across different applications in a standardized manner. Another free software project that uses IfcOpenShell I'd like to introduce is a more recent one called the BlenderBIM add-on. And this provides BIM functionality as an extension to Blender. The BlenderBIM add-on also recently won the buildingSMART 2020 awards in tech, and this is the highest award available in the field of BIM. And this kind of recognizes the potential that free software has to become the norm in this industry, not the exception. So I'd like to show you some cool things of just how IfcOpenShell can turn Blender, which I guess is traditionally more an artist's tool, into something that we can design and manufacture with. So the image here that you see is modeled in Blender, but because it follows BIM open data standards, we can bring it over into proprietary software. And even though it looks kind of artistic and sketchy and conceptual, it actually contains enough semantic data to start scheduling out components. So here's another example. All the buildings you see are actually generated from some evolutionary algorithm, which understands things like solar access and ideal circulation and comfortable areas and volumes. And it's not just for fun. This is actually really, really useful for doing feasibility studies, where you don't need fully detailed designs, but you do need to test out spaces and spatial relationships. In this example, you're looking at my living room, and this is not a photograph. It's actually a render. But unlike other renders, every object in this image is not just geometry. It actually is semantically classified and has BIM relationships. So all of this data can be accessed in Blender or FreeCAD or even the proprietary software that's currently being used. And to prove that this data is semantic, this rendering is actually not a traditional CG render, where you use a color picker and you pick up textures and adjust the lighting. This is a validated lighting simulation. So data from material samples were numerically input and semantically assigned to make sure that it's not just photorealistic, but it's also photometrically correct. In case you're curious, this uses Radiance, which is a free software engine for doing lighting simulation. And on the left, you can see the render, and on the right, you can see a photograph. So a more common example of light simulation is in solar analysis. So here we have a visualization of sun positions throughout the year and a heat map of sun hours across an analysis period. This uses another free software called Ladybug Tools, together with Sverchok, which allows users to do visual programming within the applications. And this is really exciting because it encourages users who may not know how to code to put together little pieces of software themselves, which introduces them to the flexibility that free software gives to users. Here's another example where you have a beam modeled in Blender, and by using IfcOpenShell, the BIM data was then translated into a structural model for Code_Aster. But because it uses this open data standard, the model could have equally well come from FreeCAD, so you can use the best tool for the job. And of course, here we have drawings. We need drawings to be generated from BIM models. So again, what you're seeing here are not traditional drawings where you might draw lines and polygons manually. 
These are automatically cut and generated from 3D objects and semantic data, and it's generated directly from the open data standard. So you can create this model with Blender or FreeCAD or whatever you want, which is actually what happens in the industry because people need to use different tools. And all of the annotations can be generated agnostic of the authoring application. And here's another example, which is not just about geometry. Because of all of the rich data you get in your BIM model, you can convert contractual requirements into standardized unit tests. So you can start doing QA checks when you have two or more different disciplines collaborating. So one thing I'd like to highlight about what I've shown you so far is that a lot of this is based on an open data standard. And so a lot of the code can be shared between many CAD authoring applications, as long as the output is a standards-compliant BIM dataset. So this unit test auditing tool actually works equally in Blender and in FreeCAD, same with the drawing generation. It works equally in Blender and FreeCAD, and the same with the structural simulation or the environmental simulation. All the code can be reused. And the authoring application is really just an interface to portions of the metadata as well as geometry. And this is really exciting because you can start mixing different tools with what they're really good at, like solids and NURBS modeling in FreeCAD and meshes in Blender. So I hope I managed to show you a few cool things, but there's really so much more which needs to be done. So at the beginning of the presentation, I showed all of these various disciplines, and a lot of their use cases are still not yet covered. So here's a small, arbitrary, kind of incomplete list of things that we still probably need to be developing. And hopefully over time we'll start covering more and more of these use cases. So one of the things which prompted this presentation today is that in the past year, an amazing amount of work has happened in creating free software for our industry. And it's really exciting to see it happen. And one of the signs that shows that this has happened is that if you rewind a few years ago, many people in the AEC industry didn't know what free software was. But just last year, a new community called OSArch, or Open Source Architecture, started up. And although it says architecture, it's really about the whole built environment and covers all these disciplines. And before this community existed, we didn't really have a community for people to discuss free software that was across all of these disciplines. Instead, we had people discussing and doing really great stuff, but in these pockets in the industry. And in just under a year, we're now at over 700 members and there's a wiki and a forum and a news site and a growing collection of articles about how to start adopting these open data standards and how to start switching to more free software. There's also an IRC channel where there are usually 20 or 30 of us online. But most importantly, the people involved in OSArch are not just developers. They are people who are working in the industry, and this really makes a huge amount of difference because it helps connect both users and developers across multiple projects on real life workflows. And the practical result of this is that there's a lot more code sharing and communication between developers. 
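As an illustration of the "contractual requirements as unit tests" idea described above, here is a hedged sketch of the kind of check such an auditing tool performs. It is not the actual Blender/FreeCAD auditing code from the talk; the file name and the specific rule (every wall must declare a FireRating in Pset_WallCommon) are hypothetical, and only standard IfcOpenShell calls are used:

```python
import ifcopenshell
import ifcopenshell.util.element


def walls_missing_fire_rating(path):
    """Return GlobalIds of walls that violate the example rule
    'every wall must declare a FireRating property'."""
    model = ifcopenshell.open(path)
    offenders = []
    for wall in model.by_type("IfcWall"):
        psets = ifcopenshell.util.element.get_psets(wall)
        if not psets.get("Pset_WallCommon", {}).get("FireRating"):
            offenders.append(wall.GlobalId)
    return offenders


if __name__ == "__main__":
    missing = walls_missing_fire_rating("project.ifc")  # hypothetical file
    if missing:
        print(f"{len(missing)} wall(s) have no FireRating: {missing}")
    else:
        print("All walls declare a FireRating, check passed.")
```

Because the check only depends on the IFC dataset, the same script works no matter which authoring application produced the model, which is exactly the point being made here.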
So to end today's talk, here are some links and credits for the really cool recent stuff that's been happening, as well as how to check out the OSArch community. So thanks for watching and let's help change the industry. So, Dion, I think that we are going to be starting here now with the Q&A. So, first of all, thank you for an absolutely amazing talk. This kind of is really opening some of the areas of the CAD dev room. I'd like to start by asking you a couple of additional questions that came up about how this interoperability can be expanded. Is it just a matter of, if a software package wants to make itself useful to the BIM world, all it has to do is be able to digest and import, export this IFC standard that you were describing? Or is there more to it? Yeah, I think part of it is being able to speak the same language. I just want to clarify that it won't necessarily be an import-export, because we're not talking about a file format here, we're talking about a schema and set of relationships. So as long as everybody standardizes the way that they record certain relationships, then at the very least we'll be able to interoperate on some of the basic workflows. Of course there'll be niche topics, but at least we can cooperate a bit better. Okay, no, that makes sense. So one question here from Julian Todd is, what do you see as the necessary elements to bringing some of the larger civil engineering companies into the open software or the free and open software area in terms of their dollar contribution? So they have a lot invested, obviously an individual project can be a strong driver there for them. So what's the impediment and how do we go about clearing that? My background is actually as an architect, so I would be the wrong guy to answer this question. But that's what OSArch is for, so please ask it again over there. But I do also feel, and I could be totally off the mark here, that you're absolutely right, that in terms of civil engineering there is less free software available and the maturity level is much lower. Whereas for architects, it's a lot further ahead. And then each discipline is a bit plus or minus, like cost planning, it's also very much behind the times. So I think one by one the disciplines will start to catch up. And as soon as everybody starts needing to speak the same language, that's when the incentive will grow for people to start adopting, at the very least, not just free software but something that can interoperate with free software, because the current state of the ecosystem is that we're stuck in proprietary silos. So the moment we can start speaking the same language, we can incrementally switch discipline by discipline. And so related to that, how well do these proprietary systems speak the IFC relationship model? And generally, not very well. But I think that's just symptomatic of the culture of the AEC industry, because it's so diverse, it's so fragmented. And "it's the least bad thing" is the best way to describe it. This is the closest we've ever got to agreeing on some sort of standard. But there is a trend that people are seeing that it's growing and growing extremely quickly, especially being pulled by clients and big government clients who say no, we don't want a proprietary data set after throwing all this money at this kind of dollar project that will expire in a few years. 
So it's really being client driven: whether they like it or not, people will need to deliver, contractually, high quality data in this open data standard. Well, that's an interesting point. So if you were to project ahead, are we looking at the evolution of these open BIM projects being client driven, so clients specifying and perhaps even putting their funds where that is? Is this something that is on the radar of clients, or are we still in the infancy where we're not quite advanced enough for that to become a topic that even comes up between customers and, say, the architecture field? I think on an already significant number of large scale projects, clients are contractually requiring this data. But the issue is that nobody has the technical means, due to the proprietary ecosystem, to work out whether the client actually got what they've asked for. So it's turning into a bit of a checkbox exercise. So everybody's saying, oh, look, we're doing BIM, we're doing data, look at all the data we're generating, but in reality, all the data is pretty garbage, they just don't know it yet. But slowly people are being clued into this. And so when you give the clients the tools to inspect, they'll say, hold on. And I have witnessed many scenarios where fees have been withheld simply because they got the smarts to look at it and say, hold on, you know, this isn't quite right. And so that will be a client-side pull. But I think it's also happening to the people in the industry, like the architects and the engineers. Speaking from the perspective of architects, they really do mean well, they want to create really neat data sets because it helps themselves. It's just that they don't have the tools. There isn't currently a free software equivalent that's as mature as the commercial offerings for large scale projects at the moment. But the moment that something starts coming close, I think we'll start seeing the smaller firms, people doing commercial products, start switching over, and that will sort of snowball, I believe. That makes sense: as we get more mature software coming in, everyone starts to coalesce around the most useful aspect in the room. Which brings me to one point that I didn't quite see in your talk: what sort of legal compliance do the open software packages need to provide here? So say we're doing simulations, for example, where we are actually trying to generate data that is used to show that the building won't fall down in a high gust of wind. With the larger packages, you kind of have this idea that no one ever got fired for, you know, buying IBM or Autodesk or what have you. So how do you see that hurdle being addressed in the open software field? From the perspective of architects, we don't get paid based off our performance. I mean, if the building looks kind of funny, it's like, oh, well. Anyway, I digress. I guess all we're talking about here is a way to interoperate a bit more, that's what BIM is about. It doesn't replace the fact that there are many, many sub-disciplines with their own standards of quality to assure. So it doesn't change the fact that you're still using FEA or you're still using a particular light simulation engine. All those are totally agnostic of the open BIM concept. So, yeah. Well, Dion, thank you so much. This has been absolutely fantastic. 
I'm hoping that, let's see. Wayne, did you catch any additional questions here? I don't see any now. So, Dion, would you like to leave us with a closing thought? Well, just I guess another shout out to osarch.org, because that's where a lot of people are helping, people from incredibly diverse backgrounds coming together to say, hey, you know, let's talk about a new ecosystem of tools that talk to each other. And so please check that out so that we can continue to build stuff. The FreeCAD guys are there. So, come along. Excellent, excellent, fantastic. And we'll post the link here for this chat as soon as our schedule moves over, which I think, oh, I was off by five minutes again. So, we have a little extra time here, so sorry about that, I was trying to wrap up the Q&A session a little bit early. They're not going to open the chat room here until we actually move through the Q&A time, so let's see. Inside of OSArch, what you're kind of describing is this collaborative meshing of different platforms, and you had a slide up that showed all of the different niches that maybe haven't been addressed at all. Is there room for developers who might be interested in this to kind of come in and build a library that talks IFC with the larger community inside that niche, and then what does it take for them to tie into this overall ecosystem? I think that's the beauty of it, because there are just so many things that are lacking, so there's so many things that you can work on, and you don't need to be a guru in anything, although if you are, then all the better. So, there are of course well known projects like LibreDWG in the 2D CAD world, which we still see a lot of use of in our industry. And unfortunately, it's still just not quite there yet. And then you'll get really niche things, like we need better web viewers, or we need little utilities which just run more audits, or we need simple types of spreadsheet generation things, and people to build templates for costing in different places of the world. And some of these are very simple CRUD applications, so there's a huge variety of tasks, which I think, no matter how good your coding abilities are, or even not just coding, just your knowledge of local standards, just trying to become a power user, is extremely, extremely useful. That's an interesting point, because in a lot of jurisdictions, at least within the United States, the building standards are not generally available, at least as an open system. So how would you, I mean this is perhaps a little left field, but how do, how do. Thank you.
BIM (Building Information Modeling) is a paradigm for 3D CAD models made for Architecture, Engineering and Construction (AEC). Long a closed, proprietary garden, it becomes more and more an open, hackable world thanks to several Free and Open-Source tools and formats. This talk will try to illustrate how rich that world has become when your tinkering, hacking, coding itch starts to scratch... In this talk, Dion (developer of BlenderBIM) and Yorik (developer of FreeCAD, specifically its BIM tools) will use these two applications and try to show some clever tricks that you can do with BIM models, that no proprietary software would dream to achieve.
10.5446/56988 (DOI)
Hello everyone, my name is Urban Bruhin, I am the founder and main developer of LibrePCB. Today I will give you a short update about the LibrePCB project. For those who don't know LibrePCB yet, it is an open source software to draw schematics and to design PCBs. It is cross-platform and runs on most computers, Windows, Linux, macOS and other operating systems, and actually it even runs on ARM CPUs. The user interface aims to be intuitive and easy to use, so even without any knowledge about LibrePCB you can create a PCB very quickly, but at the same time it is also intended for professional users. It has a powerful library concept which allows you to save a lot of time due to high re-usability. The file format is human-readable and optimized for version control systems. And generally speaking we focus a little bit more on usability and stability of the tool rather than on bleeding edge features. Two years ago I had the last project update talk at FOSDEM 2020. Since then we released three new versions of LibrePCB which add many new features and improvements. Last year we also switched from the qmake build system to CMake and refactored the software architecture a little bit to keep the project maintainable and future-proof. I will not mention every improvement here, for that you could read the full changelog on our website. Instead I picked just a few of these improvements which I will show you here. One of the improvements is a new unified and more powerful number input field which is now used across the whole user interface. Everywhere you can enter a length value, you can now enter mathematical expressions which are then evaluated. So you don't need a calculator anymore when drawing a footprint from a datasheet. In addition the unit can now also be changed by a context menu so you can quickly switch between metric and imperial units. LibrePCB now also has a DXF import in the symbol editor, footprint editor and in the board editor. So for example if you already have the board outline as a mechanical drawing, you don't have to redraw it in LibrePCB, you can simply import it. In addition we now also have a pick and place export to allow automated assembly of the PCBs. The last feature I want to mention is a completely new way to start ordering the PCB. Generating Gerber and Excellon files is still a challenge, at least sometimes and especially for beginners. It's not easy to understand how to generate correct production data files, especially since every PCB manufacturer has slightly different requirements on the production data format. For example regarding file naming, file extensions, Gerber format version, merged or split drill files, there are a lot of things you could do the wrong way. Therefore we integrated a much simpler way to order the PCB directly from within the application. A shopping cart symbol in the board editor and in the schematic editor opens a dialog which quickly explains what this feature does. And with one click you can upload the project, which is then opened in the web browser. And with one more click the project is forwarded to Aisler to finish the order. Let's quickly demonstrate how this works. Let's say you have now finished your design. You could now generate fabrication data the normal way with Gerber files and Excellon files. But this is quite complicated and sometimes a little bit annoying. So especially for beginners it is much easier to use this feature. 
And the dialog contains a link to the website which will give you a little bit more information about what this feature does exactly. But we will now simply upload the project. Then the web browser is opened. Here we even see some electrical rule check messages because our design had some issues I forgot to fix. So this is a reminder to fix the ERC messages first. The design rule check is not yet run on the server side. But there is at least a reminder that you should run it locally before ordering the PCB. And in future we will also integrate it into this website. Then we can continue by forwarding the project to Aisler. And here we are. You can see already the price. You can choose your configuration and we have a preview of the PCB. Now you can just enter the order details, shipping address and so on. Do the payment and that's it. You have ordered the PCB. So many thanks to Aisler, who does not only provide an API for this service but even makes a donation to LibrePCB for every order made with this feature. So actually using this feature is not only an easy and fast way to order the PCB but it is also a very simple way to support the LibrePCB project. Now the next release of LibrePCB will also contain an Eagle library import feature which allows converting your Eagle libraries to LibrePCB. And a new feature-rich PDF and image export and print dialog for schematics and boards, including a live preview. Now let's take a look at things that happened within the last two years beside the new features. One cool thing is that the number of installation packages is constantly increasing. Besides the official installers and portable packages we have provided since the first release, LibrePCB is now also available as a Flatpak and a Snap package. In addition the community packages LibrePCB for many different distributions, for example Arch Linux, FreeBSD and NixOS. That's pretty cool. Thank you very much to the package maintainers for maintaining these packages. Due to the Snap package on Snapcraft, LibrePCB is now even available out of the box in the Ubuntu Software Center. So probably this is now the easiest way to install LibrePCB on Ubuntu. Translations into languages other than English are continuously growing as well. At the moment LibrePCB is available at least partially in 13 different languages contributed by 42 translators. A big thank you to the translators for your work. Now what's the overall state of this project? The library management and the library editor are working very well and are fully usable. However, part number management is not available yet. The schematic editor is also working very well but there is no support yet for hierarchical schematics, buses and other advanced features. The board editor is probably the weakest part at the moment. However, it is totally usable for creating PCBs which are not very complex, but there is no support yet for slotted holes, slotted pads, blind/buried vias, 3D view and more things. Export of Gerber files, Excellon files, pick and place, BOM and PDFs is available, so all important production data can be exported. The part libraries are still a little bit incomplete but of course you can always create the missing parts on your own. Here are some PCBs created by the community. This might help you to see what LibrePCB is currently able to do. For example, this is a counter with Nixie tubes. Here we have a Raspberry Pi shield to connect some sensors, a water temperature sensor with flora one, an expansion board for single board computers or a keyboard. 
These projects were posted by LibrePCB users in our forum. It's always cool to see what LibrePCB is used for, so if you created a PCB with it, it would be great to share it in our forum as well. Here is the link where such pictures can be uploaded. So what are the next steps? There is no concrete timeline when to implement which feature, but generally I think we need to work on the following tasks. Adding support for part numbers in the library and for assembly variants. Then more advanced PCB features to allow creating more complex PCBs, for example arbitrary pad shapes, blind/buried vias, slotted holes and so on. Support for 3D models in the library and a 3D board viewer with STEP export, and in the schematic editor hierarchical schematics and buses should be implemented. Of course there is always room for improvements regarding the user interface and extending the part libraries. However, keep in mind that implementing all these features requires a huge amount of time and since I still have an almost full time job, the time I can spend on this project is quite limited. So you should not expect these features to be implemented very soon. But of course I am working on them. So if you like LibrePCB or this tool is useful for you, I would greatly appreciate a donation. The more donations are made, the more time I can spend on LibrePCB and the more powerful it will be. And remember the easiest way to support LibrePCB is to use the integrated ordering feature, which is actually very funny since this donation doesn't even cost you any money. Of course there are always other ways to contribute, you can check out our contribution guidelines here. Thank you very much. ...and one of the most exciting new PCB CAD tools out there. Can you say a couple of words maybe about where you see LibrePCB right now in the overall marketplace and where you are going to be taking this next? So thank you very much and hello everyone. Well, I think LibrePCB, at least in the first step, is mainly intended for hobbyists and for private projects and maybe also for smaller companies creating only rather simple PCBs, maybe companies which don't have PCB design as their main work but just create a few PCBs in a year. But of course for the long term the idea is to also reach a little bit bigger companies, but of course mainly it's for hobbyists and private use. So that's also why it is intended to be intuitive, so that you don't need to spend a lot of time just to learn the tool and you can create your PCB very quickly. That sounds exactly like what a new user would want to know or to be able to get into. LibrePCB, I've used it a number of times, definitely has a user friendly interface; I can see the time that you really put into helping to develop that intuitive nature. One of the amazing features I saw in this video was that integration that you've been building in with Aisler, and I know, because KiCad is also doing similar things, that the first question everyone always asks is: great, you've got one, who else can you build this out as a plug-in for, who else can you integrate with? Obviously PCBWay and JLCPCB are the ones lots of people are going to be interested in, so can you say a couple of words about that? 
The feature is designed in a way that the PCB manufacturers are implemented not in the application; the integration is implemented only on the server side, so the feature within the application is the same no matter how many and which manufacturers are supported on the server side. At the moment we only have Aisler implemented, and I think for now we just want to get some experience: is this feature used, does it work well and so on, and get a little bit more experience with this new feature. Of course, if users are requesting it, we can look into adding more PCB manufacturers. So I think that should also, I mean, it should be possible, but there are no concrete plans and I just want to get some more experience with that first. Okay, that sounds like a great project. In the last minute that we have for the Q&A: if people want to help, if they want to contribute to the LibrePCB project, what are the best ways that they can go about doing that, where would you like to see contributions? I think contributing to the source code requires quite some time to get into; the easier way is actually to start with documentation and with providing libraries, which are also hosted on GitHub. Excellent, well thank you Urban very much for this, and we're going to open this up to the hallway chat in just a minute.
A short overview about the progress and the current state of the LibrePCB project. LibrePCB is an Open-Source EDA software to design PCBs, providing the following advantages:
- Cross-platform: Windows/Linux/MacOS/Others | x86/ARM
- Intuitive & easy-to-use UI
- Powerful library concept
- Human readable file format
10.5446/56990 (DOI)
Hello, everyone. This is my third talk at FOSDEM, a small tradition, I guess. If you are not familiar with OpenCascade Technology, or OCCT for short, you can watch my previous talks about it. This year, I will tell you about the technical side of things. My colleague Vera will bring the community-related aspects. In this talk, we cannot highlight each fix and improvement contributed to the kernel, but 7.6 introduced numerous fixes, including advances in the STEP translator, the face maximization tool, Boolean operations, extrema, offsets, IGES, etc. Here we indicate only some essential updates. Since 2013, when 6.7 was released, OpenCascade has established a yearly release cycle of the framework, in response to community complaints about rare releases in the past. As releases are timely scheduled, minor version bumps are not expected to emphasize specific improvements. Still, each release is a new revision of OpenCascade Technology, accumulating the ready-to-use changes at the moment of the release, with some minor stabilization effort. This practice is close to a rolling distribution model adopted by many other projects, though some of them also switched to a year.month versioning scheme to indicate this. Actually, we do not plan to change our versioning scheme. The 7.6 release adds more options to configure the building process. Now it's possible to build it without Xlib (for Linux, of course), without FreeType and DK. Users can take advantage of this improvement to better adapt OCCT to their needs. We work hard to support as many platforms and compilers as possible, but supporting too old, pre-C++11 compilers restricts project evolution and makes no sense anymore today. So support of Visual Studio 2008 has been finally discontinued. Scaling in TopoDS_Shape was the root cause of various defects in our modeling algorithms. Tolerances associated with geometry carriers, i.e. vertices and adjacent faces, refer to non-scaled originals, so geometry scaling may lead to inconsistencies between formal tolerances and real ones. As a result, the kernel now forbids assigning scaling to TopoDS_Shape objects. If one really wants to embed scaling, it can be done through the BRepBuilderAPI_Transform class. BRep and XCAF, the native formats for storing OCCT geometry without conversion, now preserve vertex normal information. 7.6 can read both old and new versions of the format. Here we use the good old backward compatibility principle, where a newer version of the software can deal with older versions of files. The Extrema package and its topological equivalent calculate extremums between geometric entities, like the minimal distance between curves or the projection of a point on a surface. In 7.6 we reduced the memory footprint and added parallel execution; these two are relevant to the BRepExtrema package. Support of trimmed curves is added for the curve-curve case of the Extrema package. Like in 7.5, we keep working on the progress indicator; we added it to Boolean operations, the sewing algorithm and BRepExtrema. OCCT had two implementations of the Boolean operations; we informally call them old and new. The old implementation has fundamental defects that cannot be resolved. That is why we stopped supporting it and are now getting rid of it. This release removes the API of the old Booleans, preventing their accidental usage. The Poly package received new methods to intersect a mesh with an axis and a triangle with an axis. A new algorithm for accurate order-independent transparency is added. We added these figures to make this talk not so boring.
The left image shows no order-independent transparency. The middle image shows weighted order-independent transparency, available in previous OCCT versions. And the right image shows the depth peeling implementation. Visualization now provides a simple but fast method for drawing shadows from directional light sources. We keep working on improving compatibility with embedded and web platforms, and now the visualization core is covered by automated tests of the OpenGL ES driver in addition to the desktop OpenGL driver. We are looking forward to seeing more community projects using OCCT as a WebAssembly module for modeling, data exchange and visualization. This release brings reading support of Draco-compressed glTF files. It is a lossy algorithm giving an outstanding compression ratio for glTF files. Also, entities representing STEP kinematics are added, and a writer for the OBJ format is added. Minor improvements include the OSD_FileSystem class for unified and extendable C++ file stream operations. CAD models are much more complex than they might look. Assemblies, instancing, transformations, nesting, references and other particularities complicate assembly-level operations. One typical problem when you deal with assemblies is subassembly extraction. Now this operation is harnessed in OpenCascade Technology. Partial loading of an OCAF document reduces the reading time by loading only the attributes of interest. In total, the overhead is nearly 15% when the whole document is read. Additional details on the topic can be found in our blog post. We retrospectively versioned our previous OCAF documents and added writing support where possible. This enhancement extends data interoperability between applications using different versions of OCAF documents. What are we going to do next? First of all, we plan to drop support of pre-C++11 compilers. As mentioned earlier, by the way, you can participate in a poll created by my colleague Kirill related to this question. Modeling doesn't need any special mention because it's the very heart of the kernel; we definitely will keep working on it. In visualization, we are looking forward to simplifying its usage; at the same time, new features might be implemented. In data exchange, we really want to add thread re-entrancy to the STEP translator. Reading of tessellated geometry from STEP is highly demanded, but I'm not sure that it will be finished in 7.7. We know that our documentation is a weak point that could be dramatically improved, and we spend noticeable effort to make it better in each release. In 7.5 we reworked the overall structure; in 7.6 we restructured samples, added a new Qt-based sample, and added a novice guide. This work is far from completion, and that is why you can see it here. My previous talks were a bit generic. This time, I'd like to make it a bit more personal. Well, I'm planning to take part in dropping support of pre-C++11 compilers, and maybe in some modeling activities. Also, I was reviewing most patches related to documentation, so there is no reason to stop this practice in the future. And you know who is to blame. In my spare time I'm currently busy with general activities related to getting rid of unused headers, forward declarations and friend classes. It is a small thing, but I hope that my Raspberry Pi 3 will build OpenCascade Technology in less than 3 hours. Right now, I want to turn the floor over to Vera. She has something to say. Thank you, Alexander. My name is Vera Stabnov. I'm the OpenCascade Technology Community Manager.
I'm going to share with you new opportunities for OCCT users. The first ones are related to easing access to the technology. OCCT is known to be quite a complex framework, so the entry threshold is quite high. Last year, we put some effort into easing it. First of all, having received numerous requests from the open source community, we decided to publish free trainings to ease the access to OCCT. The presentations cover preliminary, geometry and topology topics. You can find them in the training and e-learning section of the OCCT development website. Secondly, we introduced a novice guide as a part of our documentation to attract new users and help them with the OCCT onboarding process. As a part of continuous documentation improvements, this year the samples section was fully restructured and organized in a more logical manner. On top of that, highlighting for code snippets was finally added throughout the documentation to make it more user-friendly and easier to work with. Last year at FOSDEM, the subject of a testing dataset was also raised. We received requests from users interested in running the OCCT automated testing system with more shapes in addition to the basic testing options. As a result of extensive analysis, a dataset consisting of more than 2,500 shapes was published, simplifying the testing process for users and contributors. With these public shapes, OCCT test coverage rose almost twice, up to around 60%. One of the major OCCT community improvements last year was the new developer website launch. It brings a couple of nice features like single sign-on and forums merging. Earlier, our users could be confused choosing the right forum to ask questions. The forum structure has been revised to improve navigation and allow subscription to interesting sections instead of a single section for all topics. The new website also features an updated Get Involved section expanding the ways to contribute to OCCT development. In addition to code contributions, we encourage users to share their ideas and vision on OCCT development, help to educate others by writing articles, blog posts and creating samples, contribute documentation and tutorials, and just spread the word about OCCT. We've also introduced two big website sections. The first is the recently launched OpenCascade Technology Projects and Products Marketplace. This new section allows sharing information about OCCT-based products. Being a single access point to already more than 30 projects, it eases entry into the technology and provides one with a wider look at OCCT capabilities and application areas. We invite you to explore the marketplace. If you work on an OCCT-based project or product, we encourage you to share your experience with the community by requesting to list your project. In the future, we plan to provide the projects with an official OpenCascade Technology Partner status, including wider opportunities for marketing and promotion. One more new section is the Research and Science Publications Listing. OpenCascade Technology is actively used at academic and research levels around the world. To provide insight into it, we launched a new section devoted to OCCT-based research projects and articles. It already gathers more than 600 research and scientific works from prominent universities and organizations in more than 40 countries. If you also use OCCT in your research or scientific project, you can request us to list it in the collection to increase its visibility and share it with the community.
To get an idea of OCCT's spread in research and science, you can also explore our recently published infographics report, which introduces various aspects of OCCT application by universities and even commercial companies around the world. Of course, we plan more exciting OCCT activities this year. We plan to implement a fully digital workflow based on the Tokyo Sign eSignature service to make the contribution process easier and faster. We are working on OpenCascade Technology rebranding as the project is evolving. We plan to launch a regular technical blog and welcome external project authors. Some of you already know about it and agreed to participate, so if you are interested in preparing an article for the OpenCascade website to share the experience of using OCCT in your project, please contact us via the contact form. As always, we will be very glad to hear from the community. Contact us to share your thoughts, ideas for collaboration, or tell us about the project you work on. Looking forward to hearing from you. We'll get to the mainstream and we'll start in just a couple seconds here. I'll introduce you. Welcome. I'm here talking with two members of the OpenCascade team about the new releases and some of the features that they were discussing in their talk. I'm really enthusiastic to hear about all of the new community changes that the OpenCascade team is working on. Vera, can you say a little bit about the reception that you're getting so far and what you'd like to see develop with that going forward? Yeah, thank you, Seth. I'm community manager at OpenCascade, and I think one of the major improvements, or opportunities for the community, in 2021 was launching the marketplace for projects. Currently, it's free and everyone who is working on a project based on OCCT can join, can request to be listed there to promote their projects. For us, it's actually very important to have this connection with the community and with people who are actively working on implementing OCCT in their projects or products. For the next year, we also plan to launch a technical blog, which I'm really excited about. Currently, we have nine authors who agreed to participate in the blog, authors from external projects. If there is anyone who wants to join or to prepare an article for the OpenCascade website, please contact me. I'd be really glad to feature your article there. We also will have a rebranding of OCCT. It will not be huge, but still very interesting, and some more nice features, the CLA digital process, as we mentioned earlier, and so on. Stay tuned. Excellent. I saw that you are making some very big strides in the visualization side of OpenCascade. Alexander, can you say a few more words about where you're pushing that part of the technology? Well, I'm not a visualization guy, but let me try to answer. Visualization is, I think, one of the most important things, because in the B-Rep world, what you see is not what you have: a three-dimensional model is some kind of mathematical concept, and without visualization it's quite difficult to move forward. Our efforts in visualization are aimed at revealing the technical things of B-Rep modeling, I guess. Okay. No, that makes sense. On the back end side, you've introduced WebGL as a rendering engine. Do you have any plans for additional back end engines? Honestly, for now, no. When people want to contribute more to the OpenCascade project, I saw that your barriers to entry are being lowered.
Are there, do you still have the multi-step process, the authorization form to OpenCascade, or have you updated that to different models? So, yeah, we still have this multi-step CLA signing process, but hopefully this year, very soon, we will replace it with a fully digital one, so it will be much easier and faster. Excellent. And real quick, is there one feature that you're most excited about for 7.7? Alexander? Actually, I will be happy to see improvements in the STEP translator related to threading or accuracy. Excellent. And Vera? Good question. As for the community, more activities, more contribution, more bugs reported, and not only technical bugs but maybe improvements in the documentation, so everyone can help. Thank you.
Open Cascade Technology (OCCT) is a framework for B-Rep modeling. The lecture presents a technical update from the previous talk (at FOSDEM 2021). This year we also introduce our OCCT's Community Manager who will highlight community-related activities that happened during 2021.
10.5446/56991 (DOI)
Hello, today I will talk about pushing the open source hardware limits with KiCad. My name is Tsvetan Usunov, I am from Olimex, Bulgaria. Olimex is a small company dealing with electronic design and the manufacture of electronic devices. All products which Olimex sells are designed and manufactured in Plovdiv, Bulgaria, and many of our products are licensed as open source hardware and certified by the Open Source Hardware Association. For 10 years already we have worked on a small Linux computer family which is completely open source hardware, named OLinuXino. In these 10 years we experimented with more than 12 different processors, starting with Allwinner, NXP, Rockchip. Some of these processors are very popular and we still sell boards with them, like the A13 and A20. Of some of these designs we just made prototypes and decided to scrap them because of hardware bugs or poor performance. What makes me proud is that our GitHub repository has hundreds of forks and there are thousands of products made based on our designs. Open source hardware is always about collaboration and feedback from the community. When your design is more accessible to people, you will probably get more feedback and more collaboration. This is why in 2015 we decided that all our open source hardware projects will be made exclusively with KiCad, and one of our first complex Linux board computers made with KiCad is the A64-OLinuXino. Over the years we learned that there is no point in hopping on every newly released Chinese SoC with poor Linux support and buggy, undocumented hardware. One of our most successful products is still the A20 Lime2. Why? Because it has mainline Linux support thanks to the linux-sunxi community. It does not have very high performance parameters, but still enough for industrial applications. In industrial products people mostly value reliability, good support for the software, extended temperature grade and especially long-term availability. When you design an industrial product it's usually subject to a huge certification effort, and people don't want to touch the designs for many years after they pass all this certification. So having long-term availability is very, very important, and this is why A20 Lime2 Linux computers are so popular for industrial use. Meantime, people who work on free, libre and open source software projects also noticed our open source hardware computers. Why? Because this is a perfect match for the open source software, because the final product becomes completely open source, both software and hardware. Our first successful cooperation was with the FreedomBox Foundation. We made a special Pioneer FreedomBox Home Server Kit for them and they maintain a Lime2 image on their website for direct download. One other interesting project we also support, and which has a special image for Lime2, is the GNU Health project. This project has a very noble idea: to help doctors and laboratories all around the world with open source software which helps them to maintain their patient files, appointments, analyses and so on. We are very happy that we are part of this GNU Health project. Industrial applications do not require a lot of resources, but servers like those necessary for FreedomBox or for GNU Health, and others like Home Assistant, ownCloud and so on, need RAM, they need more cores, more performance, they need a fast internet connection, and this is where the A20 Lime2 lags behind. So two years ago we were contacted by Mr.
Victor Andriy Toyu from the French company Ignitio, asking us if it was possible to design an open source hardware board, but with higher specs, a high-end open source hardware server board. He had studied the existing market and evaluated what is available, and none of the solutions on the market were open source hardware. Mr. Andriy Toyu offered that Ignitio would cover the cost of the development of one such more powerful Linux server. So we started research to see what possible SoC candidates with good specs, proper documentation and proper Linux support exist, and these requirements excluded almost everything from the Chinese SoCs. What we found is the NXP i.MX8QuadMax, which is dedicated to automotive infotainment and has an octa-core processor with plenty of processing power. It supports up to 16 gigabytes of RAM, 64-bit data wide, and two Gigabit Ethernet ports, and for storage it has options for SATA and for PCI Express. So this was a perfect match for a server application, and we started this most complex board we ever designed in July 2020 and finished it after 10 months in April 2021. We used KiCad for this board. This was our first choice and we never experienced any problems while we were designing this board. The only issue is that we still cannot verify the prototypes, as the semiconductor crisis hit us and we cannot source some key components. The processors and PMICs were delivered free of charge by NXP, but there are still some memories and specific integrated circuits which are not available on the market. So we have to wait until they become available so we can verify the prototypes. One other minor issue is that this board became very expensive because of the specs used. Meantime, another French company, NXID, managed to secure the financing for the development of our next, less expensive iMX8 board, which is the i.MX8M Plus. It has a quad-core Cortex-A53 plus a Cortex-M7 core processor, and what is interesting here, it also has a 2.3 TOPS NPU which allows machine learning. This board can address up to 6 GB of DDR4 RAM and it has PCI Express, USB 3 and a Gigabit interface with TSN. Also it is available in industrial grade. We also did the design of this complex board with KiCad and it took us only 5 months this time, because the processor was less complex. You can see the final result here. This board is booting and everything is working from the very first try. And this board is made with KiCad. Everything, the small board and the motherboard. So if I have to make a summary: there is a demand for high-end open source hardware servers which guarantee security and privacy on all levels, and KiCad development and its community are flourishing right now. It's already a mature CAD tool and allows designing anything from simple to very complex boards. If you have any questions I will be available in the chat to answer them. Thank you. Welcome. I am speaking with the head of Olimex Ltd. Thank you so much for the talk. This was really interesting for people who use KiCad and other open source tools in their professional development. I think one of the questions that came out of this was: how do you design your workflow? When you have a project come in with a new board, like we were describing in the talk, what does your process look like to move from your existing libraries into a developed product like the board you were showing? Well, thanks for inviting me here. I appreciate being here. What is our workflow?
First we study the datasheets and we try to see if there are some inconsistencies in them or some things which are not clear. Then we contact the semiconductor vendor asking for explanations. Sometimes there are errors in the datasheets. So this is where we start: we first make sure that the information we have is correct. Sometimes we compare with the reference designs they publish, and if we see some differences we start to ask and clarify which one is correct, because there are always some little glitches in the documentation or in the schematic. Some people didn't update them on time; they know about an issue but it is not in the documentation. This is where we start, because otherwise you have to do double work. After we clear up all the unclear stuff in the documentation, we start with the component creation. We make a component. Usually this is done by two people: one is creating, the second is checking if everything is correct, because when you check your own work you often miss problems and errors. So you definitely need two different guys: one to do the work and a second one to check it, or vice versa; then they switch roles, the second one continues and the first one checks. After we know that the components are created correctly, we start the schematic. We usually check what is available as hardware designs, like reference designs and so on, but we try to see if we can optimize the design. What does this mean? We want to make it designed for manufacturing, for manufacturability, using our own existing technology. We have more than 600 different products and maybe more than 6,000 different components. When we have to choose, for instance, a DC-DC converter, we first search in our stock to see if we have something which can do the job, because there is no reason to keep 100 different DC-DC converters which probably do the same job. So we try to reuse what we have in stock. And this is how we build the schematics. We don't always follow the vendor reference design exactly; we try to adapt it to our stocked components and to our working process, so that later it is easy to manufacture this board in our own factory. Every factory has its own setups, its own rules. We, for instance, know that some machines have a limited number of feeders, so we have to choose components which fit in these feeders. Sometimes we have, for instance, 3.3 kilo-ohm, 4.7 kilo-ohm, 6.8 kilo-ohm, but we just put one value because we will use one feeder. So sometimes the schematics are not just following the electrical rules; sometimes we just try to optimize the process, to make it with a minimum of changeovers and a minimum number of different feeders, so the assembly machine can manufacture this board faster later. And once the schematic is done, we start to choose the proper component locations on the PCB. This usually depends on how fast the signals are, what the arrangement is, whether we will put this board in a box where the connectors have to go on a certain side, and so on. And later on we start the PCB layout. So this is more or less the workflow for one board. And do you build all of, you mentioned you're optimizing for your manufacturing process, do you manufacture them in-house? Yes, absolutely. Everything is manufactured at Olimex. We have control over every process, over every step of the production. Do you find that you're able to identify and correct more issues by having that in-house production capability? Absolutely, absolutely.
Sometimes we take external orders and this is always hell for our production, because they use very odd components. They don't know our limitations, and we cannot build easy and cheap boards which are not designed by us, designed knowing our assembly line, our technology limitations and so on. That makes sense. People who are not familiar with Olimex yet, and there might be one or two out there, might be surprised to hear that you have more than 600 different products that you have designed and built. How many of them do you still have in active production? Olimex produces every one of these boards if there is demand. So even boards which are very old, 10 years old, 12 years old: if there is demand and we have the components, we always produce them. Many of our customers appreciate this, because we have customers which use very old ARM9 devices, which they put in certified industrial products. And to recertify this product costs a lot of money. So they use obsolete technology, and I tell them this board is too expensive to manufacture, this processor costs 8 euro, you can find a processor for 2 euro with more capability. But they say: we don't want to touch this design. It works. Don't change anything. And that is a great way to end.
The talk will cover the design of very complex and powerful OSHW Linux boards based on the new NXP iMX8 and ST STM32MP1 SoCs with the FLOSS KiCad tools, then running FLOSS software for cloud, IoT and Health to offer full transparency to the people who value their privacy not only on the software but also on the hardware level.
10.5446/56992 (DOI)
Good day, good morning, good afternoon, wherever you are. My name is Bram Vogelaar and today I'll be discussing continuous integration pipelines using Nomad, Vault and Jenkins with you. So let's set the stage. It's Monday morning, it's stand-up time, and your favorite intern proudly announces that on Friday he solved the problem with our main money-making product by fixing the database connection. You're probably not alone when you have the same exact reaction as I had, saying: wait, what, did we have a problem? Was it the database? Why didn't you call me? So when I opened up the code that this intern committed, it looked a little bit like this. It's actually a bit less complicated than what we're showing here, but it didn't tell us much; it wasn't very obvious what the problem was, why it was fixed and how it was fixed. So of course the initial gut feeling is to say: we don't deploy on Fridays, you should have come to us, you should have waited for us. But that's actually a problem, because what is required of us as a business is that we get stuff in front of our users as soon as possible, because that's how we make money. So we want a faster time to market. We want features in front of our users in hours or minutes instead of years or twice a year. And we want to do it in a safe and reliable fashion. So we want to have a fully automated process that will tell us if we do stupid things. That way we'll have happier customers and investors, because they see that we listen to them and that we announce new features very quickly and very regularly. And our developers and managers are happy because we can do it in a reliable, secure fashion, so we have confidence in what's going on in life. For that I would like to introduce Jenkins. Jenkins is an open source automation server. It's been around since 2005, when it was still called Hudson, and ever since the lead developers forked the project in 2011 it has been called Jenkins; to me it's the gold standard for testing. It's written in Java, and the core system doesn't actually do too much, but it has a very extensive plug-in ecosystem which pretty much allows you to do anything you want with the system. But using CI/CD has an interesting problem. You don't actually make too much money with it. If we scale it too big, it's actually not doing too much; if we scale it too small, it's actually stopping our workflow, so our developers will have to wait for new capacity to come online. So actually you want something cloudish, and for that Jenkins has a cloud plug-in system that can automatically scale workloads. And for this I want to introduce Nomad. Nomad is an open source workload scheduler created by HashiCorp. It can do batch work, it can do containerized workloads and it can do non-containerized workloads, but today we're discussing the way of the Docker. It has native service discovery integration via the Consul product and secret management via HashiCorp Vault, which we're going to see a bit later today. It has RBAC built in, and jobs can be defined in HCL, which most of the HashiCorp products are using. And it has very nice Jenkins integration via a plug-in which has been shown some love of late, and it's now fully functional and exactly what I wanted for my workloads. So let's have a look at how that works. In our case, let's first have a look at the Jenkins job. Let's make our screen as big as possible. The job in my case is called QA.
I'm going to make sure it runs in my lab datacenter, and it's going to be deployed into a namespace called deploy, which in my lab equates to an environment called deploy, and it's going to use the type service. There's a type batch, which we're going to see in a minute, which is used for finite tasks that have a fixed amount of work to do and then exit. And there is a system job, which is something that runs on every Nomad node; those are useful for things like log aggregation. But in this case we're choosing a service, which is something we want one of, and it's going to run until basically Jenkins gets stopped. We're going to stick it on a volume, which, for the people that know Kubernetes, is similar to a persistent volume claim, because we want to store jobs, configuration, things in Jenkins, and we want that information to survive restarts of the Jenkins job. We're going to expose two ports: one, the HTTP port, which is going to be the UI, and another port, JNLP, where our workers are going to contact our Jenkins server. It's going to have a little service, so we're going to announce it into our service discovery tool so that our edge router can pick it up, and this is how I set it up. It's all dynamic: it looks at my service discovery, and any service that has the appropriate tag, in this case traefik.enable=true, is going to be exposed and I can access it through my web browser. There's a little health check involved to make sure that the service is healthy; it's going to look for the /login URI on the Jenkins server to see if that gives us an HTTP 200. We'll skip these two tasks and come back to those in a second, because first I want to introduce the job itself. There's a task called jenkins. Its driver type is going to be Docker, so we're going to containerize everything, and it's going to use the upstream LTS image for Jenkins. We're going to use the two ports that we've described before. We're also going to make sure that a config file, which we'll get to in a minute, is mapped into the container so we don't have to manually click around.
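To make that concrete, here is a minimal sketch of what a Nomad job along those lines could look like. This is not the speaker's actual file: the volume name, mount path, image tag, port numbers and resource figures are illustrative assumptions; only the overall shape (a service job, two ports, a Consul service with a Traefik tag and a /login check, a Docker task with the LTS image) follows the description above.

job "qa" {
  datacenters = ["lab"]
  namespace   = "deploy"
  type        = "service"   # long-running, as opposed to "batch" or "system"

  group "jenkins" {
    count = 1

    # Host volume so jobs and configuration survive restarts
    # (the volume itself has to be declared on the Nomad client).
    volume "jenkins_home" {
      type   = "host"
      source = "jenkins_home"
    }

    network {
      port "http" { to = 8080 }   # web UI
      port "jnlp" { to = 50000 }  # inbound agents
    }

    # Announce the UI into service discovery; the tag is what the edge
    # router keys on, the check is what marks the service healthy.
    service {
      name = "jenkins"
      port = "http"
      tags = ["traefik.enable=true"]

      check {
        type     = "http"
        path     = "/login"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "jenkins" {
      driver = "docker"

      config {
        image = "jenkins/jenkins:lts"
        ports = ["http", "jnlp"]
      }

      volume_mount {
        volume      = "jenkins_home"
        destination = "/var/jenkins_home"
      }

      resources {
        cpu    = 1000
        memory = 2048
      }
    }
  }
}

The traefik.enable tag and the /login check are the two details that let the edge router and the scheduler agree that the instance is actually up before traffic is sent to it.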
Speaking of the clicking around: Jenkins is an awesome system, but as I said, it's fairly bare out of the box, so you'll have to configure it. As standard, Jenkins will start with a wizard where you get prompted to make a certain set of decisions and then you'll have a running Jenkins. In this case, I do not want this. I want to set up Jenkins completely automatically, because when I build my applications, that is completely automatic. So I'll know how my QA system works and know where it ends up. I want to have the same amount of structure in the system that builds the application that I trust, so I need to have an equal amount of trust in the system underlying my build system. So I also want to have a fully automated Jenkins setup. This is possible. The lovely people at Jenkins came up with a system, well, actually, it's a plugin, that does Jenkins configuration as code. You can basically define pretty much anything you want; any configuration that is possible in Jenkins, you can turn into configuration as code. We'll have to do a little bit of YAML programming, because that's the format the plugin uses. In this case, it's fairly simple. We'll be working pretty much in either the jenkins section or in unclassified, and in this case we'll be setting the protocols that the agents are allowed to use to connect to the system, and we'll set the number of executors on the master to zero, so we only run work on scaled-out infrastructure. This brings us to: how do we get these plugins into Jenkins, then? That requires those two little steps that I showed before. The first one: Jenkins is very particular about the ownership of the files in its project directory, so we need to make sure that inside the container the files for the Jenkins home are owned by Jenkins. So we're going to run this task. In this case I'm adding a little lifecycle hook that says this is a prestart task, and basically this is somewhat of a batch job: it's going to run, it's going to exit, and then it's going to continue with the other tasks. The second task is the plugin installer itself. You used to only be able to do this comfortably in the web UI, where you had to click stuff together and Jenkins would, under the hood, build out the dependency trees and install all of those. But if you wanted to automate it, that was actually quite hard. What you had to do is work out which plugins you wanted, what all the dependencies are and the dependencies of those dependencies; you would have to write it all down. It changes a lot, and then you would have to find a script to download them all, get them in the right position on disk, and then your Jenkins would be happy. But luckily for us, there's now a lovely CLI tool that does all this work for us. It's called jenkins-plugin-cli, and you only have to pass it a list of the top-level plugins you want to install and then point it towards the directory where you want them installed; in this case, it's again this Jenkins home volume that we've defined. It's also a prestart task, so it's going to do its thing and install them there. And after that, Jenkins can start as regular.
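For illustration, the two prestart helpers could look roughly like the tasks below, sitting in the same task group as the jenkins task from the earlier sketch. The busybox image, the UID 1000 and the exact plugin list are assumptions; the jenkins-plugin-cli tool and the prestart lifecycle hook are the parts taken from the talk.

  # Fix ownership of the shared volume before Jenkins starts.
  task "chown-home" {
    driver = "docker"

    lifecycle {
      hook    = "prestart"
      sidecar = false
    }

    config {
      image   = "busybox:1.35"
      command = "chown"
      args    = ["-R", "1000:1000", "/var/jenkins_home"]
    }

    volume_mount {
      volume      = "jenkins_home"
      destination = "/var/jenkins_home"
    }
  }

  # Resolve and download the top-level plugins (and their dependencies)
  # into the same volume, again before the main task starts.
  task "install-plugins" {
    driver = "docker"

    lifecycle {
      hook    = "prestart"
      sidecar = false
    }

    config {
      image   = "jenkins/jenkins:lts"
      command = "jenkins-plugin-cli"
      args = [
        "--plugins", "configuration-as-code", "job-dsl", "nomad", "hashicorp-vault-plugin",
        "--plugin-download-directory", "/var/jenkins_home/plugins",
      ]
    }

    volume_mount {
      volume      = "jenkins_home"
      destination = "/var/jenkins_home"
    }
  }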
So let's see what that looks like. Since I am incredibly lazy slash forgetful, I have Terraformed all of this, and in this case it's just a simple Terraform job; ta is my lazy alias for terraform apply. More on that as we get around to it in a minute. So let's have a look if that's actually running already. Here you see there was already a job running called images, and that's my Docker registry. It's not running on the edge device, which I probably should have done, but here we are: my Docker image registry is already running. The second job, QA, got started. We see that we want one allocation placed, one is healthy, and we can drill down into the allocation. So we have a group which currently has one allocation, which is fine. Nomad already shows you a little bit of a monitoring setup. And here we'll see the tasks that we defined before, the first one chowning the disk, the second one installing the plugins, and as displayed here it looks scary, right, they say dead, but the exit code was zero, and if you remember these are only batch-style prestart tasks, so this is actually the desired outcome that we're after. In this case you also already see Nomad assigning a dynamic port from quite a large range, and we'll see that the service is announced and the appropriate tag is set, and currently everything seems to be fine. So let's have a look. Yes, it's been announced into Consul. We see that Traefik has picked it up. So let's have a look. Yes, so here's Jenkins. It's been fully configured; as you see, there are no build executors. And, not surprisingly, there's also a first little job already. That makes sense, because that's how I wanted it. So let's go back to the configuration we had and look at a little bit of configuration I skipped over. Not only do I want to automatically install my plugins, not only do I want to automatically install my configuration, I also want to automatically configure my jobs, since the jobs are actually incredibly complex. Maybe I shouldn't say complex, but they're a complex combination of configuration, plugins and what I actually wanted to do. Before, in the good old days of Jenkins, we had to click around a lot to get these jobs into place, which also resulted in a lot of configuration drift over the lifetime of a job. So the lovely people at Netflix wrote a plugin, again, called the Job DSL plugin, which allows us to define jobs in Groovy. Here we have one job defined, my seed job, and what this seed job is going to do is build out all my other Jenkins jobs. This sounds a bit double or complex, but it actually makes sense. I pulled the jobs out of the building of Jenkins itself, because they are an artifact that has, or should have, a different lifecycle. So it should live in its own repository, and the other benefit is that if I keep adding jobs here, the file is going to become extremely long, hard to read, mistakes are going to be made, and also every time I change this file the Jenkins service is going to get restarted, which might annoy other people in your organization. So let's have a look at the seed job. Actually, let's start the seed job first, because we will see that it's going to take a while the first time. So it's going to get scheduled, but there are no workers. And that's actually, oh, it's actually starting quite quickly. Okay, so let's have a look at what's happening there. If we go back to our configuration again, we'll skip to the last section I didn't introduce. Jenkins has a system where you can have plugins that announce themselves as cloud systems, and in this case, as I said, the Nomad plugin has seen a lot of love of late; you can basically pass Jenkins enough configuration, not even configuration, you pass it a complete Nomad job. So we have a Nomad job that has in itself a Nomad job. And in this case the plugin is very sensible. As you see, you need quite a bit of configuration, but if you set up a Nomad job it actually proposes a very solid default set of options, and there are only a couple I needed to change. In my case, I want to make sure that it ends up in my lab datacenter again, and I want to have it in a namespace called practice, because that's my environment where I do my experiments. That's where QA is allowed to happen, and that's why I don't mind when I see a lot of flux in jobs. And there's one other thing that I added that's going to be important in a minute, when we actually start deploying Nomad jobs from Jenkins: I inject the Nomad address into the environment of the container, so that any tool that wants to connect to Nomad can use this environment variable to connect to Nomad.
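The worker job that the cloud plugin submits on demand is built for you from those plugin settings, but written out as plain HCL it would look roughly like the sketch below. The inbound-agent image is the one mentioned later in the talk; the job name, the Jenkins URL and the resource figures are illustrative assumptions.

job "jenkins-worker" {
  datacenters = ["lab"]
  namespace   = "practice"
  type        = "batch"   # finite: the agent connects, does its work and exits

  group "agent" {
    count = 1

    task "agent" {
      driver = "docker"

      config {
        image = "jenkins/inbound-agent:latest"
      }

      env {
        # The URL and connection secret are normally filled in by the plugin;
        # the extra NOMAD_ADDR is the variable the speaker injects so that
        # builds can talk to Nomad themselves.
        JENKINS_URL = "http://jenkins.service.consul:8080"
        NOMAD_ADDR  = "http://nomad.service.consul:4646"
      }

      resources {
        cpu    = 500
        memory = 1024
      }
    }
  }
}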
Let's see how far we are. Yes, so actually, as you saw, Nomad has worked it out: Jenkins worked out that it did not have the capacity to start building, and it requested Nomad to run a job that included the worker, based on the configuration that we just passed. So in this case it has ended up in the namespace practice, and for us it is actually a batch job, so it has a finite amount of time it's allowed to live, because in a QA system you mostly want to end up with a clean workspace, and what is cleaner than a fresh Docker instance. In this case, for the sake of this exercise, I allow my workers to stay active for 10 minutes of inactivity, so that we can keep going with this exercise. So let's have a look at the jobs themselves. This example job, the whoami one, we're going to ignore for today; it's basically my little test job to see if the basic integrations work, and it's only going to echo what user Jenkins is running as. But the one we're going to investigate a little bit more today is hello-world. Let's have a look at that one. So the whoami one, as I said, does very little other than echoing out who am I. But the hello-world is the most interesting one, and we're going to do two things there. First of all, we're going to introduce yet another way of building pipelines in Jenkins, which is the Jenkins pipeline setup. Yeah, again, it's another set of plugins, but it actually allows you to write pipelines in a more declarative way. In this case, we'll have a pipeline that needs an agent worker that has the label nomad, and that's how we connect the two, Jenkins to Nomad. This is how Jenkins works out how much capacity it needs to have and how many agents it needs to start. Then we'll have roughly three stages to build up: one is to prepare, then we're going to deploy into my theory environment, that's the acceptance stage, and if it works in theory, then I'm confident that we can go to production, and that's going to be the last step. And for that, I want to introduce a new tool, nomad-pack. Nomad-pack came about fairly recently; it's still in beta at HashiCorp, and it's sort of the Nomad equivalent of Helm charts. So you can have a well-defined task or job for services, and there's a central registry you can use, or you can build your own. The reason why the lovely people at HashiCorp decided to go this route was the following. As we've seen in the example of how to build Jenkins, HCL is very good at defining what a job should look like, but it is not very good at making stuff variable. So if I want to inject, say, a different number of running instances in acceptance or in production, that's actually quite hard to do in a Nomad job. There are a couple of options there. One is parameterizing your Nomad job, but that's always a bit clunky. And the other tool is something called Levant. It's an open source tool developed by an external contributor, James Rasell, who of late started working for HashiCorp, and Levant is now an official HashiCorp product. But it hasn't seen much development lately, and that's because HashiCorp has been working on nomad-pack. And that's the job we're going to build out today. In this case, I've forked the official example registry and I've updated the hello world example in there. And in a pack registry, you'll see a couple of files; there will be a directory called packs.
And every pack you want to write will have its own directory. And in that directory, there need to be a couple of things. You're going to have a variables file, the one that is open right here, and in it you make anything variable that we want. The hello world one is very verbose, so you can tweak around. The only thing I really changed from upstream is that I always force it into my lab datacenter, because that's all I have. And if we look at the job itself, they actually turned it into a template system. It is the Go template engine, so anything you can do in a Go template you can do here, which is great. So here we'll make things very flexible, and then you can pass nomad-pack, on the command line, the variables you want to change. In this case, the datacenters are variable, the namespace is variable, my count is variable. It's going to build out, it's going to announce itself as a service into my Consul, because that's great. It's going to start a very simple Hello World image, and it's going to display a Hello World message that we pass into our system. So let's have a look at how that works. Let's start our Hello World job. We can click into it, we'll see it starting, and actually let's open the Blue Ocean version. Blue Ocean, as you see, is a very cool, newish way of looking at Jenkins jobs. The old interface has been around for a while, and visualizing how pipelines work there is actually quite hard, well, not very pretty, so Blue Ocean is a way of trying to see what's going on. So we see our three steps, our three stages. The first two have automatically been executed. So my nomad-pack has registered with my private, well, that's actually public, but my personal pack registry. Actually, the image I built, which we can have a look at, is very simple. It's an extension of the official Jenkins inbound agent, which has everything built in that you need to connect any cloud agent to Jenkins, and then I've added a little layer where we download nomad-pack, and we register with this registry; we re-add it here because it should have automatically updated, but it doesn't. And here, this is actually what's happening right now, this is what is in our configuration script: we ask nomad-pack to run the hello world example from my personal registry, and I want to pass it a couple of variables. In this case, I want to make sure that it ends up in my namespace theory, so in my acceptance environment, and I want Nomad to announce to Consul that this particular, well, job is announcing itself under the theory service. And if we have a look at Consul, we see that it's already nicely announcing itself. It's been registered by Nomad, it has two instances, and again, we enabled the Traefik tag, exposing it via Traefik. So let's have a look at what that looks like: at theory.lab we'll see Hello World. Of course, it's not the most exciting example.
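Before moving on to the last stage, here is a rough sketch of what such a pack could look like on disk, condensed into one listing. The [[ ]] delimiters and the pack-name prefix on variables follow nomad-pack's conventions as I understand them, and the file layout, image name and defaults are assumptions rather than the speaker's actual registry.

# packs/hello_world/variables.hcl
variable "message" {
  description = "Text the app should display"
  type        = string
  default     = "Hello World!"
}

variable "namespace" {
  description = "Nomad namespace to deploy into"
  type        = string
  default     = "theory"
}

variable "count" {
  description = "Number of instances to run"
  type        = number
  default     = 2
}

# packs/hello_world/templates/hello_world.nomad.tpl
job "hello-world" {
  datacenters = ["lab"]
  namespace   = "[[ .hello_world.namespace ]]"
  type        = "service"

  group "app" {
    count = [[ .hello_world.count ]]

    network {
      port "http" { to = 8000 }
    }

    service {
      name = "[[ .hello_world.namespace ]]"   # theory or reality, as in the demo
      port = "http"
      tags = ["traefik.enable=true"]
    }

    task "web" {
      driver = "docker"

      config {
        image = "example/hello-world:latest"  # placeholder image name
        ports = ["http"]
      }

      env {
        MESSAGE = "[[ .hello_world.message ]]"
      }
    }
  }
}

A pipeline stage would then call something along the lines of nomad-pack run hello_world --var namespace=theory --var message="Hello World"; the exact flag names are worth double-checking against the nomad-pack documentation.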
So let's check out the second stage, yeah, the third stage I should say, the real deploy stage. I want to show two things here: that I can pass a variable called message with another value, but also that Nomad can very nicely integrate with Vault. So what I did is extend the job with a call into the Vault plugin, request a credential called supersecret and then inject it into my little stage as the, not very imaginatively named, variable supersecret. And that we're going to pass into our image as a super secret. As we show here, it's not something you should do, but this is an example that we could, but probably shouldn't, ever. For that, we need a little adjustment to our Nomad file. Actually, it's here, I cheekily enabled it already. That Vault integration brings us a problem: Vault also uses a token system. Of course, it's a secret management system, so it would be bad if you could easily access our secrets. We need to present credentials to Vault, we get a token, and this token can be used to access secrets. So instead of passing Jenkins my credentials, we need to give Nomad a Vault token that can get to secrets. In a normal setup, I would now have to hard-code this token into my Jenkins job, and of course, you see, this would end up in Git, so the secrets wouldn't be that secret anymore. But luckily, Nomad has very cool Vault integration. What I do is inject the token I've obtained into the environment of the running Nomad before starting, and then in the Nomad job for Jenkins I actually get a new resource block for Vault that says, here on the screen, use a certain Vault policy; policies are Vault's way of limiting the powers of a token. In this case, I have a Vault mount for Jenkins specifically, so any secret that is needed by Jenkins is in its own little mount, which is a subsection we can manipulate. In that mount, I stick my supersecret, and the value is, again, not very imaginative. I limit the powers of the token we're going to generate by limiting any token created with the Jenkins policy to only the secrets that are available in the Jenkins mount, and it can only read them. So we no longer have to hard-code the token: Nomad is going to generate a token based on the Jenkins policy and inject it into the running container. So we store no secrets, and now we need to Terraform this again. And here we see the items we're looking for that have been generated in Vault: we've created the Jenkins policy, and in the secrets we have a KV store that has our supersecret in it. The reason why I want to use Vault instead of Jenkins is the fact that in Jenkins you can perfectly well inject credentials yourself, but again, it's a manual job, and I don't want to do manual jobs because they're hard to reproduce. But also, if I want to build out more than one Jenkins, I would have to create that credential everywhere and make sure that it's in sync, and I'd have to do manual lifecycle management on the passwords; when I use Vault, this is all automatically done for me. So it's there, and this got exposed to Jenkins using the Vault plugin. If we have a look at how that's configured, it's actually fairly simple: basically, in the unclassified section, we say we want to connect to Vault, we're going to use the Vault token that got injected, and we're going to connect to our Vault service that we pick up using its Consul service discovery. And then we're going to add two credentials. We're going to inject the token that we've created from the environment; so, if you remember, the Nomad-derived token got injected into the Jenkins container, and we're going to pull it out of this environment and create the credential from that token, so Jenkins can connect. And then we're going to build out this supersecret as a credential as well, and instead of hard-coding the value in there, it's just going to point to the Jenkins secret mount.
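Sketched out, the two pieces described here are a small Vault policy and a vault block on the Jenkins task, both written in HCL. The mount and policy names follow the talk; the capability list and paths are assumptions.

# jenkins-policy.hcl -- Vault policy: tokens with this policy can only
# read secrets under the Jenkins mount.
path "jenkins/*" {
  capabilities = ["read"]
}

# Addition to the jenkins task in the Nomad job: Nomad derives a token
# restricted to the policy above and exposes it to the task (by default
# as the VAULT_TOKEN environment variable), so nothing is hard-coded.
task "jenkins" {
  driver = "docker"

  vault {
    policies = ["jenkins"]
  }

  # ... image, ports and volume_mount as in the earlier sketch ...
}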
So if we run this job now, well, as we continue deploying, we're going to deploy to reality. And it's actually very quick. So we've seen the deploy; it's another Nomad run and we're going to inject this as a message. And here the people that wrote the plugin actually did a lot of work for us: they make sure that the secrets that we pick up from Vault get obfuscated in the log output, so we're protected there from any exposure. So let's have a look. Here it's already running. We see the theory job that we created in the previous step and now the newly started job that is running in the reality namespace. In that way I split out jobs with the same name into different namespaces and they can all run happily in the same cluster. Let's see if it's live already. Yes, so here we are. And as it says on the screen, please don't do this at home; this is a purely academic example. And that brings us to the end. I hope I proved to you that we can deploy Jenkins on Nomad, that we can automatically install our plugins and configure our running Jenkins, and that we can automatically scale our Jenkins agents onto Nomad. We created our jobs from script, from code, and we deployed our little test Hello World application from our own registry all the way to production, using a secret from Vault. So my name is Bram Vogelaar. If you want to have a longer conversation, I'll be around in the chat room. You can also reach me via email, or if you want to rant on Twitter, you can tag me as @attachmentgenie. And by the time this is broadcast, the slides should also be available on SlideShare. You showed us a lot of tools. I mean, the first thing in the title is Jenkins, so everyone was thinking Jenkins, but I think you are a big HashiCorp fan, right? Yes, I mean, in no way, shape or form affiliated, but I'm a big fan. Okay. So we saw today Consul, Nomad and Vault in action, and on the other side we had Jenkins, and what was the plugin for visualizing the pipelines? That's Blue Ocean. Okay. And let me see if we have any questions in the talk. Right now, no. I was wondering, before, I was using Vault but never Consul, and I was wondering why you have chosen all these tools: only because you are a HashiCorp fan, or have you evaluated other tools as well? Yeah, you know, the big elephant in the room, of course, is Kubernetes. Who hasn't used it? I've used it, but I'm still scared about the scale and the amount of moving parts. And when you select Nomad, then Consul and Vault are basically packaged in. Well, not packaged in: yes, I have to run them, but they're so easily integrated by HashiCorp already that it's a shame not to use them. Right. And of course Vault could have been used with Kubernetes; you can even use Consul with Kubernetes as its layer, as the service mesh that is built in. It's built on Envoy, which, as Kubernetes users know, is fairly commonly used in the Kubernetes world as well. Your whole setup is running in Kubernetes, right? No. No, it's all Nomad. Yeah, my stuff is all Nomad. Okay, got it. It's still Docker, with Nomad. And service discovery is Consul, the secrets we've seen are Vault, and then a little bit of Terraform source code to tie it all together. Not for running. Got it.
And I was wondering, because I have no HashiCorp background, but Jenkins, everyone knows Jenkins: why have you used Jenkins here in this demo? Have you tried to use a different tool? I mean, would it be possible to run it on GitLab CI? So I've used Jenkins because that's my staple. But also, I know people have tried running GitLab runners on Nomad. I'm personally not entirely sure what the state of that is, but it should be possible, for instance, to use GitLab runners on Nomad to have the same autoscaling feature that will grow or shrink with whatever need you have at that time. Okay. So I'm going to do my homework and look more into Nomad. I was always kind of skipping it, so that's really interesting. Same for me. It was the last tool that I started using and then I became a big fan. It's like, yes, this is the simplicity I'm looking for. That's true. I think we have also talked about it today: Kubernetes should get simpler. That's an issue. Thank you for your time, thank you for your talk, and see you probably next year. And the channel is going to be open now, so if there are any questions, we can continue the discussions in the chat. Okay. Looking forward to it. Thanks for watching. Bye bye.
Things like Infrastructure as Code, Service Discovery and Config Management can and have helped us to quickly build and rebuild infrastructure, but we haven't spent nearly enough time training ourselves to review, monitor and respond to outages. Does our platform degrade in a graceful way, and what does a high CPU load really mean? What can we learn from level 1 outages to be able to run our platforms more reliably? This talk will focus on setting up a CI/CD pipeline using Jenkins. We start by configuring Jenkins to use our Nomad platform to autoscale job runners, after which we'll look at using the newly released nomad-pack tool to convert, deploy and test an existing Nomad job.
10.5446/56993 (DOI)
Hi, I'm Moigu and I will present you how to improve the developer experience in GitLab and how to automate the do-it-works to focus on development. This presentation is more a use case presentation of how we use Eptapod within Logilab. Eptapod is a friendly fork of GitLab with mercariable support. During this presentation, I will first present Logilab and how our code is organized and what are the problems we are facing within Logilab. What are the existing solutions inside GitLab and GitLab? The main part will be on the tools that we design and are open sourced. We use internally to have some reviewer and some short definition. It really is something on the internet and how to update dependencies. First, Logilab is a small IT company, around 25 people in France. It's focused on scientific computing, web semantics and Python training. We do a lot of custom development for clients. We work almost only with open source software and we contribute with code or money on the project that we are using. Here are some examples. On the left are some associations in France. On the right, we contribute with money and also some examples that we contribute with code. As we are developing applications for our clients, we use something called QBQab, which is a semantics web application where a framework is developed in Python inside Logilab. You can think of it as an alternative to Django. It has an explicit data model and also a lot of components that can reuse from an application to another one. We call it Qubes. There is a lot of framework. Qubes is front agnostic. You can have your back end written in Qubes, but your front page is written inside the React application or whatever you want. Our text tag is composed of HTML, JavaScript, CSS. For the front, front end, Python for the back end with Qubes. Postgres called for database service. And eventually, everything is deployed with Docker. Inside Logilab, we have a lot of repository. We are using Mercurial, which is an alternative to Git written in Python. In order to use Mercurial, we can't use Mercurial with GitLab. We use Eptapad, which is a friendly product as I stated earlier. We have a multi-repository approach. We have one repo for Qubes, around 200 repository for public Qubes, some private Qubes for the client project, some importers projects that are not related to Qubes and also some internal repository. So we have a lot of repository to take care of. And all of them are hosted on Eptapad. If you have only one repository, you could think that keeping the repo clean and updated is simple and having several hundred repository is hard to have everything clean and updated. But I tend to disagree with that. I think that both situations are hard, because the only difference is that the problem occurs faster when you have a lot of repository. And the problem that I'm describing, I'm facing within Logilab is the first one is easy having a CI. So test and link, link, run every on every commit. Having a reviewer on every request, and so you don't have a request lying around for years, have a Korean code base so that you have a high standard code everywhere. And having your CI configured the right way across all the repository, having everything released properly and tagged on Pipeye for cases, because we are mainly on Python. But also as we use Docker, having everything correctly tagged, updated and properly set up. If you come from GitHub, with GitHub Action and Probot, you can have auto assign, the pandabot, release the drifter to do a lot of things. 
If you come from GitLab, you have the danger bots. You can do some of the work, such as the reviewer who let to have someone assign as a reviewer, or some information of your merge request. But that didn't fit our use case well, and we made our own tools. And what I will do now is present you, rely on a use case example, so when a new feature is introduced inside Kubiquab, and how all the tools we designed help us to review the merge request, make sure the tests are passed, the dependencies are updated, and everything is published correctly. So for example, you have a merge request on the project, so on Kubiquab. A schedule job of assign bots will assign a reviewer based on user preference. In contrast to reviewer who let our assign bot from GitHub, you assign our assign bot, rely on group membership, but also on some user preference. So that if I can say, okay, I want to focus on reviewing a lot of works this week, I can say that I will review only like 15 merge requests during the week and three per day, and then change those values in time. And so that all every person does not have the same workload on merge request reviewing. Also, assign bot will take care of kindly remind you to do your work on the merge request and put in a comment if the merge request is inactive for a week. Then when you have a reviewer, the first thing that has to do is make sure the CI is green before anything else. And having a right job is hard to keep it up to date. So what we do is have a GitLab CI template. GitLab CI template is a project, an app.tox project inside of our app.tapod instance, and it defines a lot of jobs. And within a project, we will use those jobs to declare the stage and the jobs that we need. And this will help us reduce the CI load, because all the jobs will be customized. And so we make sure to have the right job definition. So here there is an example of GitLab CI. So we have Pi 3 and some Docker. We'll build some Docker image, but also we can have some extensions. So even though we have some shared job definition, we can always customize a job or create a new state or create a new job. And here is an example of the Pi 3 job. There are a few things to notice. First, it shows a customized Docker image with all the dependency already installed. So we don't waste CI time to install everything. It also stores some artifacts. In our case, all the deprecation running that occurs during the test. Among those things. When you have your test screen, one thing that you may want is to test your application. And what you can do is build Docker image. So when you have your magic rest accepted, we define another job inside GitLab CI template that builds Docker images. And what's present here is inside the image build latest job, we are using some specific tags or latest and also the hash of the commit. And we do this only if we are on the default branch. So if you are in GitLab, it should be master. But if the CI is running on a tag inside the repository, it's not the same tag to apply to the Docker image. So here we are using the CI commit tag. And we do this only if we are on a tag. Then when the magic rest is accepted, everything is published. At some point, you want to release everything. And in our case, to pipy. So that you are using release new. Release new is a small example of how it behaves when you run it. It detects a new version number that you will use for the next version and updates the code. 
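To make the shared-template approach described above concrete, here is a hedged sketch of what a project-level .gitlab-ci.yml using such a template could look like. The template project path, hidden job name, image names and artifact file are assumptions, not Logilab's actual definitions; the two build jobs mirror the tagging rules just described (latest plus the commit hash on the default branch, the repository tag when the pipeline runs on a tag).

```yaml
# Hedged sketch of a project pipeline built on shared job definitions.
include:
  - project: "open-source/gitlab-ci-templates"   # hypothetical shared template project
    file: "templates/python.yml"

stages:
  - test
  - build

py3:
  stage: test
  extends: .py3                                   # assumed hidden job from the template
  image: registry.example.com/my-cube/ci:latest   # image with dependencies pre-installed
  artifacts:
    paths:
      - deprecation-warnings.log                  # warnings collected during the tests

build-image-latest:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'

build-image-tag:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  rules:
    - if: '$CI_COMMIT_TAG'
```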
And we do this by using the semantic versioning of all the commits between this version and the last one. And we generate a change log from the commit message. Always let you modify it if you want. Update the packaging information. So in Python, the manifest. And after all that, it try to commit and tag it correctly. And as we are using GitLab, the only thing you have to do is push everything to your repo. Because we have another job from GitLab, saying, hey, if you are on a tag, something you want to publish, then run this specific job that will itself publish to PyPy. Now that you have your, and it's everything released, you want to update all the dependencies. And for that, we use kube.doctor. kube.doctor can update the repository for various reasons. The first one is pretty simple. It just rebays all the made requests in a project to make sure that we can safely review the work and manage it. And we know if we have some conflict, there is some conflict, and we have to do some manual rebays. But also, it creates a made request to update the package version. So here on the right part, you have an example of a made request created by kube.doctor to update kube.doctor to the next version. Because at some point, we release a new version, and inside this repo, it updates the code to update the dependencies automatically. So as a reviewer, as a maintainer of a repo, we have made requests to update everything. And this here will be green, and I will just have to click on the on the made button, and everything will be updated. But also, kube.doctor will take care of some to create a made request for some linked configuration. And also, once you can also create some rules to automatically refactor all your base on the repo. So for example, as you saw earlier, we take care to collect all the deprecation warning from the test. We collect them, and we can analyze it to understand what are the deprecation warning occurring, writing a new rule inside kube.doctor to update the code, and then having kube.doctor creating all the made request needed to fix the deprecation warning. And with that, let me conclude. So the main part is within Lojla, we are using Eptapod, and this is a great tool with a lot of features. The main one such as the schedule job to have something run every hour, every week, from a thing about to kube.doctor to everything. We are using the registry a lot. So, the image registry with the code is currently tagged, but also some package, Python package registry, and we rely a lot on shared job definition. Having a good CI helps a lot to keep the clean update. However, having a good CI needs a lot of manual effort, at least in our case. And we end up writing some tools, so the tools that I presented you. So, I think I have some reviewer assigned automatically for a project, and make sure to not fluid someone with tons of magic requests every week. We use our own version of GitLab CI so we have one place, one single source of truth to create our jobs, and we can update it to have everything, have all to handcraft all the jobs and making sure to take full advantages of the CI. So, having custom images, artifacts, everything. We rely on release new to easily create a new version on the repository. So, having always the same commit message to on the new version, creating a change log, making sure the packaging is good, is right, and then GitLab compiles again to actually do the work to release to the Internet. 
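The kind of merge request that cube-doctor opens can be reproduced in a few lines against the GitLab/Heptapod API. The sketch below is not cube-doctor's actual code; the forge URL, group, branch name, pinned version and file layout are all assumptions made for illustration.

```python
"""Rough sketch of a cube-doctor-style bot: open a merge request that bumps a
dependency in every project of a group. Not the real tool."""
import gitlab

gl = gitlab.Gitlab("https://forge.example.org", private_token="TOKEN")  # hypothetical instance
group = gl.groups.get("cubes")

for gp in group.projects.list(iterator=True):
    project = gl.projects.get(gp.id)
    # Create a topic branch off the default branch.
    project.branches.create({"branch": "topic/update-cubicweb",
                             "ref": project.default_branch})
    # Commit the updated pin (a real bot would rewrite the packaging files instead).
    project.commits.create({
        "branch": "topic/update-cubicweb",
        "commit_message": "chore: bump cubicweb",
        "actions": [{"action": "update",
                     "file_path": "requirements.txt",
                     "content": "cubicweb==3.38\n"}],
    })
    # Open the merge request that a maintainer only has to review and merge.
    project.mergerequests.create({
        "source_branch": "topic/update-cubicweb",
        "target_branch": project.default_branch,
        "title": "Update cubicweb",
        "remove_source_branch": True,
    })
```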
And then when we have to update all the repository to update the code itself, we use Cubector with some rules that we write ourselves. And that helps us to have all our repository, several hundred of repository, always clean and that thank you for your attention and see you in the question phase. Thank you for your talk. I'm looking if we have any questions in the channel. Seems there are no questions here in the channel. But after looking on your presentation, I was thinking how you keep track of all the merge requests, I mean, because you automated pretty well everything, and how you keep track to see that the merge requests are not piling up or if everything works fine. So we did have a small period of time when we were having a lot of merge requests and we're not making sure everything was merged and reviewed. And we ended up having custom dashboard, so the dashboard, small static pages made with pages, features inside GitLab. So in GitLab you can have on every commit, rebuild the dashboard and see all the merge requests that are open, closed, ready to be merged because the CI is green. So we have this small web page that gather all the information. It's a central web page or is it for each merge request? No, it's a global one. So we have a project dedicated to gather all the information. So every hour I think it runs on schedule jobs and it poke around with the API, the GitLab API to see how many merge requests there are if they're open, closed, and everything. Cool. I'm looking at the channel if we have any questions. No questions at all. Then let me think, after representation you said you have GitLab templates. And those templates are like, what type of templates do you have even? I guess you have a library of templates, right? Yeah, we have several templates, mainly of three kind of templates. One based on the LinkedIn problem, making sure that we have the link correctly configured for each project, for Python, we could have some for JavaScript project, some to have the test correctly configured. So in Python 3, making sure the test could run concurrently, stores the artifact of the test. And the one related to release everything, so release to bypy, release to Docker, the image registry inside the GitLab also on Docker directly. Okay. We have one question in the chat. I was also wondering how you keep track, I mean, you mentioned you forked GitLab. I mean, it's a big tool. And how you managed to keep upstream? How you managed the... So Eptapod, the friendly fork of GitLab is not managed by us. It's a small company called Octobus. I have checked, I will send you the link just after. So it's a dedicated company that does this for us. So it has... We send money to them and say, hey, please make sure that you follow GitLab upstream and do not wait too long, wait just before updating anything.
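A rough sketch of the scheduled dashboard job described in that answer: poll the API for open merge requests across a group and write a static page that GitLab Pages can publish. The instance URL, group name and output layout are assumptions.

```python
"""Sketch of an hourly dashboard job: list open merge requests of every project
in a group and emit a static HTML page served by GitLab Pages."""
import html
import pathlib
import gitlab

gl = gitlab.Gitlab("https://forge.example.org", private_token="TOKEN")
group = gl.groups.get("cubes")

rows = []
for gp in group.projects.list(iterator=True):
    project = gl.projects.get(gp.id)
    for mr in project.mergerequests.list(state="opened", iterator=True):
        # merge_status is e.g. "can_be_merged" or "cannot_be_merged".
        status = "ready" if mr.merge_status == "can_be_merged" else mr.merge_status
        rows.append(f"<tr><td>{html.escape(project.name)}</td>"
                    f"<td><a href='{mr.web_url}'>{html.escape(mr.title)}</a></td>"
                    f"<td>{status}</td></tr>")

pathlib.Path("public").mkdir(exist_ok=True)   # "public/" is what GitLab Pages serves
with open("public/index.html", "w") as out:
    out.write("<table><tr><th>Project</th><th>Merge request</th><th>Status</th></tr>"
              + "\n".join(rows) + "</table>")
```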
Logilab has been using Heptapod, a GitLab fork with Mercurial support, for 2 years now. We maintain the open source software CubicWeb and its components, called cubes. Thus, the code is split across dozens of repositories that depend on one another. Over the years, it has become hard to maintain code quality and good practices in the whole codebase. In this talk, we will present the tools that helped us. Some of them are Mercurial specific, but most could be used with GitLab.
- Create merge requests automatically across repositories based on rules, such as deprecation warnings.
- Pick a reviewer for merge requests based on the developers' preferences.
- Make sure to commit, tag, update the changelog and publish to PyPI when releasing a new version.
- Share GitLab CI configurations with templates.
- Host Docker images of your project on the forge.
- Keep static websites, documentation and web applications up to date.
Each use case can be solved easily, but combining them is what truly makes developer life easier.
10.5446/56999 (DOI)
Hello and welcome to this talk about the Debian conference infrastructure. My name is Kyle. I've been a part of the Debian video team for about six years since 2016. In this talk we'll be looking a bit more about how the video team streams and records the Debian conferences that happen throughout the year. So let's dive in. Just kind of a brief overview of what Debian is. It's the annual Debian conference attended by Debian developers and Debian contributors, even just interested parties every year. It happens in a different country. So the last in-person one was 2019 in Curitiba, Brazil. It changes around different continents, different countries. In 2020 and 2021 the conference was online and we had to develop a new set of systems to manage that. The Debian video team is responsible for recording and streaming all the talks in the rooms that we're covering. So we record each talk. It then gets reviewed and uploaded so that it can be viewed afterwards as well as streaming in real time during the events of people that are remote, can attend and listen to the talks. We also cover other Debian events. So many Debian conferences that we can get to, we cover as well. Generally we cover the ones in Europe because our hardware is based in Europe. So getting to the Europe ones are easier than having to ship that through various other international places. So in in-person DebCup we have generally three recorded rooms, two talk rooms and a bathroom. So of those we cover the majority of the talks in the conference. We also have a video team LAN that we run across those three rooms and a back-end video team server room. We deploy our own streaming CDN network and maintain a local Debian package mirror for the attendees so that they can download packages, manage and do Debian work without having to deal with too much latency to get those packages. So an in-person DebCup relies on volunteers. We use a lot of volunteers for every talk and expand that over three rooms it becomes a lot of people. We need two camera operators, a director and a sound technician, kind of the three core volunteers that we need. We also then need a talkmeister and a room coordinator. Then the room coordinator and talkmeister are merged into one role so that they're covered by a single person. But the talkmeister is responsible for doing the introductions and managing the time, so handling question and answer and getting the speaker know when time is up. The camera operator is responsible for managing a single camera. They cover the shots, make sure that the person that they're filming is in view. It's easy to see for people on stream and all of the shots are mixed by the director who sits behind the VoktaMix PC and controls what's going out on stream, whether that's be one of the cameras, the presentation or a mixture of the three. The sound technician mixes the audio. So we've got an array of microphones and other inputs that they manage. So they ensure the audio levels are well balanced and audible on stream so that the people giving the talk can be heard. And then we've got a core team that's kind of the debconf video team on standby for troubleshooting and working out what happens if things go wrong. So if we dig into a bit more of the hardware, for an in-person room we've got two Sony PXW X70 cameras which are SDI enabled cameras. So we do all of our video transfer over SDI. We use Marta Opsis HDMI capture boards. These are open hardware capture boards that are manufactured and developed by Tim Videos in Australia. 
And they are run or driven by a Miniboard Turbo single board computer which captures the HDMI audio from the Opsis boards and streams it to VoktaMix which is a rented PC. But it's just kind of easier to rent these machines than to have a set of fleet of them that we ship everywhere. We ship cameras, Opsis and the turbots but the PCs because we use a lot of them, we use encoder nodes, VoktaMix and gateways and kind of our use of the PCs scale as based on the number that we have, it's easier just to rent them locally wherever we are doing the conference. On the sound side of stuff we have a lapel mic, we have two of them for the case when there's two speakers, cordless mics and ambient mics. The cordless mics are used for questions and for the talkmeister. The ambient mics pick up the room sound and catch any questions that may be shouted out so that they still can be heard on the stream. All of the sound is mixed using an analog audio mixer and then fed into the main camera so that we don't have to worry about any audio sync issues or anything like that. It's all handled in camera before we hit the VoktaMix machines. We use Blackmagic Declink SDI capture cards. These are unfortunately proprietary but they're reliable and they work under Linux and we have two per machine, one for each SDI feed and then those are fed into VoktaMix. On the hardware side of it there is the Blackmagic SDI capture software, so the drivers for those Declink cards. HDMI to USB is the firmware and software side of the Opsys boards. That's what's run on the Opsys board and the PC so that we are able to capture and stream the video or the presentations. The Opsys also splits the HDMI feed from the presenters laptop so that we have one feed going to the projector and the other one being captured by the Opsys PC and streamed to the VoktaMix machine. So to do the actual live video switching we use VoktaMix which is developed by CCC and our streaming is all done by Nginx RTMP on the front end with FFMpeg being the driver of that system. For recording review we use S review, same as FOSDEM and that has served us very well the last few years. That's where talks can be reviewed so that they ensure that they start correctly and are audible and visually there the whole talk before they are accepted and uploaded to several places. We upload to the Debian meeting archive, we upload to YouTube and we upload to our own PeerTube instance and all of that is managed through another piece of software that Stefano Mavira wrote called Archive and that manages all of our metadata for our uploads of our video archive. All of our machines are managed using Ansible, that's a pixie boot into a basic setup and then automatically Ansible themselves from a gateway machine so that they have all the configuration required for whatever role they are assuming. Then for online infrastructure we have a slightly different approach, we pay down the volunteers required because we don't need things like cameras or audio, we've just got a director and a talkmeister so the director does the same similar job to what they would do in an in-person conference, they do things like switch between inputs, play a pre-recorded talk and start and stop things as required. While the talkmeister does the introductions and handles the question and answer, we also have the core team on standby with access and knowledge to be able to troubleshoot things as kind of we need them to. 
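As an illustration of the "PXE boot, then Ansible themselves into their role" flow mentioned above, a minimal play for a streaming-node role might look like the sketch below. The package names, paths and host group are assumptions for illustration; the team's real playbooks live in their own repositories. The idea is simply that ffmpeg pushes each room's mixed output to an nginx-rtmp front end, and Ansible puts that stack in place on whatever rented machine assumes the role.

```yaml
# Illustrative Ansible play, not the video team's actual roles.
- hosts: streaming_nodes
  become: true
  tasks:
    - name: Install the streaming stack
      apt:
        name: [ffmpeg, nginx, libnginx-mod-rtmp]
        state: present

    - name: Deploy the RTMP configuration
      template:
        src: templates/rtmp.conf.j2
        dest: /etc/nginx/modules-enabled/70-rtmp.conf   # included at nginx's top-level context on Debian
      notify: restart nginx

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```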
Then the software that we use, the streaming pipeline is pretty much unchanged, we're still pushing ffmpeg to nginxrtmp and that to a self-managed CDN network. We built a web interface onto voctomics called Virgil and that's what the director uses to manage the various inputs. We use Jitsie for the live talks and Q&A, the only people in the Jitsie room are the presenter, the talkmeister and the director, they all get streamed or get captured into voctomics to be streamed out to people attending so that we don't have lots of people in the Jitsie room, it effectively acts like the stage and we handle questions using etherpad so each talk has their own etherpad link and in that people during the talk ask questions by typing them and then the talkmeister will read those questions out to the speaker to answer. Then what happens is the attendees will then type the answers into the etherpad underneath each question so that there's a record of what's been asked and what the answer was so we export those etherpads to static html to be displayed and archived for later use once the conference is over. In Sreview's job is extended for online conferences, we use it for previewing and approving pre-recorded talks so that they are start and stop at the right times and are in the right format for us to play as well as the post-conference recording review and talk export process. If you would like any further reading on our infrastructure and our setup you can go to video.dev.com.org that is usually a redirect to the docs site which is the next link there. But during conferences that points to the actual video link on the conference website so that's why I've got the full link there and if you would like more details on our Ansible setup you can see it there as well. Right I'd like to head on to any questions. Thanks so much for attending this talk and if there are any questions I'll take them now. Right if you've got any questions feel free to post them in the chat. I don't really see any yet but maybe I'll just take this time to cover a bit about Wafa which is the conference management system that StepConf uses. It's built by the Cape Town Python user group and we run a somewhat modified version of that. It's used to display the conference website so that includes sharing or being able to view the video streams and things like that as well as all the registration. It's used to track attendees who are interested and who actually attend both live in person and virtual events as well as do things like print out their name, badges and things like that for front desk. And we use it for managing our volunteer system. So as I mentioned all of our systems and talks are covered by volunteers and you can volunteer to do a particular task for a particular talk through Wafa. It imports and manages the system or the schedule and then uses that to generate the list of tasks for each room, for each talk so that people can volunteer and go, yeah, I'm going to do that and do it that way. The front end of that, the conference website is mostly marked out as far as I remember and it's Wafa's or Django and JavaScript's server-side rendered templates so it's all kind of fairly easy to deploy and manage. The CTPunk documentation is pretty good for that. 
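The etherpad-to-static-HTML export mentioned above can be done with nothing more than Etherpad's built-in export URLs. The sketch below assumes one pad per talk and uses made-up pad names and instance URL; it is an illustration, not the team's actual export script.

```python
"""Sketch: archive each talk's question pad as static HTML after the event."""
import pathlib
import requests

PAD_BASE = "https://pad.example.org/p"            # hypothetical Etherpad instance
TALKS = ["opening-plenary", "talk-42-questions"]  # would normally come from the schedule

out_dir = pathlib.Path("archive")
out_dir.mkdir(exist_ok=True)

for slug in TALKS:
    # Etherpad serves read-only exports of a pad at /p/<pad>/export/html.
    resp = requests.get(f"{PAD_BASE}/{slug}/export/html", timeout=30)
    resp.raise_for_status()
    (out_dir / f"{slug}.html").write_text(resp.text, encoding="utf-8")
```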
But yeah, we've used it since about, I think 2015 or 2016 was the first time and it's been constantly developed to improve since then after each DevConf there's usually a bunch of changes that go back upstream so that the general Wafa instances or code base can use them but there was, yeah, I'll fair but that is generic to or specific to DevConf that remains as patches on top. And recently Wafa has been extended to support different schedule exports. Recently it was just able to export a PentaBARTH kind of XML structure which used by Penta and a couple other systems that allow us to do some migration and management that way but it has been extended to a couple other formats as well for a bit more integration, visual integration into other apps or websites or things like that. Yeah, are there any questions? I've tried to cover as much of the Debian kind of way of doing conferences as possible. It is a rather complicated setup and definitely not as complicated as FOSDEM but a lot more complicated than maybe just a single bathroom kind of conference style. So yeah, if you need any more of the documentation or any kind of write up of how we do stuff video.devconf.org is generally a good place to start or have a look at the DevConf video team, Salsa group, GitLab group on salsa.dev.org for kind of all of our Ansible scripts, our documentation, our other applications that we've written to support kind of our use cases, things like Vohel are there, the archive and upload tool as well. The R2D R2D kind of all that side. But if there are no questions at this point, I'll think I'll end it there. And yeah, you're welcome to continue asking questions or join the DevConf conference infrastructure room where I'll stick around and we can chat there or in the conference organisation Dev Room. Thanks for the talk. Thanks for attending.
DebConf has been online for two years using a combination of our in-person infrastructure and new tools. This talk will describe both setups and what has been done to bring DebConf online.
10.5446/57003 (DOI)
So, welcome everyone. Happy to be at Fostum again. I really missed the 2012 event, the last physical one. But fingers crossed for next year, basically. But I will talk today about running conferences with the PGEU system, which is something that I do. So, as a quick agenda here, since I'll be blabbing away for an hour, I'll start with some interaction and background to do, why am I here, what are we doing and so on. And then we'll have a look at a conference in this system. There are some parts in the system apart from the conference that we will also have a look at. So, we will do a little dive into there and then we'll have a closer look at what Fosnorth looks like before we go to the summary and Q&A. So, hi, I'm Johan. You can't see me. I run a small consultant company called CodeRise, but I'm also part of a startup called Eperoto, where we do some legal tech stuff. I have a background in automotive, but I came there from embedded Linux devices and so on. So using open source since mid 90s and generally hacking around in everything from like software as a service to embedded devices to what not. I run a conference, Fosnorth, an annual conference held in Gothenburg every spring. We do a little podcast as well, even though that's currently host, but we will resume during the spring, hopefully. I will also have a supporter during Q&A, Magnus Agander, who is actually the main developer and instigator of the PG EU system. He's from the Postgresc core team. He's the European president. He's a developer and a committer there and he's the lead developer, as I said, of the PG EU system. He also works as a Postgresc consultant at Red Pill Lynn Pro daytime. So he will be around in the Q&A question and answer all the questions that I do not know the answer to. Some quick words about Fosnorth then. The original intention was to have a moderated Fosnorth. So we're trying to get the diversity. We want all the people, all the projects very cross functional. So it's a conference about nothing to sort of paraphrase what Seinfeld says about his show. We've been around since 2019. It's in Gothenburg. You are more than welcome. It's an awesome conference. It's an awesome place to come to. So please visit us. We've cancelled our physical event for 2022, as everyone else it seems. But you should definitely come by in 2023 and I promise to do my best to bring some really nice spring weather. Around the PG EU system, so it's a conference system. It's built for the European conference. And it's being generalized as we speak and it's used for more and more conferences and in more and more organizations. And Magnus Agander again started this work. So why do we use it? Why does Fosnorth use it? So the background of Fosnorth is really that we came from a bit of a hack. The idea of running a conference and sort of putting it together the first year was almost a scramble. So we came from something like Google Forms for Kofo Papers. We used Eventbrite for the tickets. We did manual invoicing of sponsors. And yeah, just generally a lot of manual checklists and work to get everything together. The PG EU system provides a lot of automation and structure around this and it really helps us run our conference. There are other uses of course. So all the European and Postgres conferences, so PG.com, P.U., Nordic PG.day, PG.day Paris, also the German one. The PG.day that's run as an add-on to the physical FOSSTEM events as well. And then the US conferences at Postgres Open, the NYC one in San Francisco. 
PG.con which is held in Ottawa but with a global focus, BSD.con and of course Fosnorth. So as you can see the tool is getting more and more spread out there and of course each of these conferences have their own little twists and turns which means that there's yes and no the checkbox to fill or not fill in the tool. But yeah, enough background. Let's have a little look at what an actual conference looks like. How do you run a conference in PG.U.? So basically a conference consists of four parts. So you have the speakers, you have the volunteers, you have the visitors or attendees and you have the sponsors. Those are sort of the ingredients that you need to make the machine run. And these are handled in PG.E.G.Crumph. So let's have a look at them one by one. But first we need to set up the conference. So we need to provide some metadata like dates, names, links. You can specify custom mailing templates and so on for your specific conference. You get some communication tools. You can email different groups of users from within the tool. You can also do news. You can do tweets and so on. You can do some admin work in the tool as well. So scheduling, call for papers and such which is something we will look at. But when you create a new conference, boom, you enter this metadata slide which is where you fill out the basics. So you have a name, you have a date and a location. You put in some emails and URLs that apply to it. So the PG.E.U. system provides the dynamic part of a web page but usually you have some static parts or other parts to the website with the actual fancy sales material and descriptions and stuff like that and that's not handled through PG.E.U.Crumph. You can add admin users, do optional stylings to the ginger templating stuff, things like that. So PG.E.U. Crumph is a Django project, hence the templating and that's how it works. And you can also set up VAT and payment options and so on which sort of segues into that we can actually do some of the accounting in the system which is one of the things that's attractive to me. But yeah, you set up a conference. There is more configuration to be done on metadata level I'd say even though you don't need to specify it upfront. So how does the waitlist work if you sell out notifications, when are they sent, how are they sent, for what event do they send. Email templates as mentioned, you can do a little promo text for your website. Different options for the registration of attendees, do you have t-shirts, do you want to know if you have photo consent, do you want to know their Twitter names. There's even a little checkbox asking if you're willing to share your email with sponsors, those kind of things. You can also specify more user roles than just admins, you have testers, talk-wokers, staff, volunteers, check-in processors and so on. This is kind of interesting because it means that you can have a single server running multiple conferences where you have different users doing the same roles so to speak. So you can have two conferences with different administrators and so on in a single service setup, which is fun. You also have some stages that you can toggle on and off independently, which means that they're not actually stages but sort of options. But as you can see they sort of do provide a set of stages or flow to it. 
So you can open and close the registration for attendees, you can open and close the call for papers, call for sponsors, you can open and close the schedule, the check-in, conference feedback, different session feedbacks and so on, which then allows you to tune what parts of the dynamic contents of what PGEU system generates is available to the various user groups. This is FossNorth.se, the homepage, which is generated by the system. A lot of it comes from the Ginger template that we provide, but then you have the next event, FossNorth 2020, which is of course an event where you can see the call for papers is open and then you have some news ticking by further down and so on, so dynamic parts. But yeah, looking at the conference, let's start from the speakers. So we have a speaker flow. It opens with a call for papers where you register a speaker profile and your proposed sessions. Then there's a voting on the talks, not by the speakers themselves but the talk voters. And some scheduling done, the speakers confirm their participation and then we're now showtime. So that's the general flow. You configure the call for paper where you can say things like do you want a tag or do you want different tags for talks? Do you want to provide a skill level like intermediate or advanced and so on? But then you do a little HTML-ish call for paper intro that's presented to the speakers when they want to submit a talk. When we come to the registration, PGEU system relies on external user identities, so through OAuth and similar things. This is something that's great within PGEU but where we suck a bit at FossNorth due to time basically. So we currently allow you to register using GitHub and Google. We know that this is not ideal, especially not for an open source conference, so we are looking at setting up our own system in parallel, so basically id.FossNorth.sc or something like that but it's time permitting. Given that we have COVID this year again, it might actually happen. But then when you've submitted all the talks and you've done that part as a speaker and that's fairly straightforward in how that looks, you end up in the program committee. So you have a number of talk voters and as you can see from the screenshot below here, we have different sessions and I blurred them out because some of them are accepted and not accepted and so on. I don't want to share that and I also don't want to share the details of my fellows in the talk voters group. But as you can see, you can grade the different talks and then you get an average and you can add comments around the talks and stuff like that, but at the end of the day you sort of reach a conclusion what's accepted, what's not accepted and you can also put them into reserve slots and things like that and then you click the big commit this button and emails are sent out to all the speakers. Everyone who got accepted needs to confirm that they're actually willing to do its thing. Everyone who's not accepted, get their sorry you're not accepted and the reserves get told that they're reserves. So quite nice actually, you can do that in stages if you like. So usually you want to prove some talks and maybe you don't want to decide on the not accepted because in some cases you have two very similar talks and you only want one of them. So it's not a not accepted, not accepted because the talk is bad, it's not accepted because the other talk is in there so you want to wait for the confirmation and that's something you can do as well. 
And then scheduling, this is something that FOSSNorth does not use in PGU system because of historical reasons. We have a nice template on the way to do this in our crazy ginger templates that we use for the static web page generation and it also allows us to handle some metadata that's not in the system like now, like the video links and stuff like that. So basically you set up a number of tracks, rooms and slots in those rooms and then you put the sessions into them which generates a schedule that you then can provide to your attendees both as sort of a web page or even as JSON data that can be rendered in different ways and stuff like that, generally very handy. To be honest we should really use it but yeah, again, time. You also have a number of reports associated with this. So I've highlighted or sort of screen shot something from a live conference here. You can see if you have uncomparmed speakers, unregistered speakers and speakers who's not checked in so they're not at the conference physically. And as you can tell they are highlighted, the buttons, if the system feels that there still are, there's a reason for concern. You also have some help with the scheduling so you can see the number of sessions that has no track and things like that. So you can make sure that your planning is actually valid. This is a very handy part with putting things into a database that you can actually generate all of these reports that usually are spreadsheets or notebooks with little check boxes next to them. It makes it a lot easier to deal with a big conference where you have multiple tracks and a lot of speakers and parallel events and stuff like that. This brings us on to the visitors or attendees as I think the system calls them. And again, we have a flow. They buy tickets and then they check in at the event but there's more to it. So you can have promotion codes with different discounts. You can have waiting lists and you can have optional packages for different parts of the conferences. And in general, this part of the system is quite complex. So you can break your conference apart a bit like we did at Foss North where we have a training day and we have the main event and you can even have sub-events within the main event and stuff like that. You can really, yeah, you can configure this in many, many different ways. So where do we start? You create your tickets. So you have registration types and registration classes which you then use to generate various tickets. We're probably abusing this a bit but we say that there are conference tickets, there are student conference tickets and there are speaker tickets. I think you could do conference tickets and then have the registration class attendee and student and still get different costs for them but we don't. And for the speakers, of course, it's free to register so that's something that's only available to them. And the sort key just presents how it's sort of ordered in the ordering system so that people know what to expect. And the prices here are without VAT so it would be 500 kronor and 200 kronor if you want to come. So 50 euros or 20 euros depending if you're a student or not. And then you can have additional options which is something that we tried in 2019. So we did a training day with heavily discounted one day trainings. Much appreciated that we actually got the trainers to do this and they also gave talks to you in the actual conference and I know it was really appreciated by the people attending. 
But here you see that we have a maximum number of users to 10 for these events and you have an additional cost to your ticket price for these various things and these are sort of upsells that you can add to your event. Promotion codes. Recalling collector you can give fixed some discounts or percentage discounts. It's something that we use for booth staff for sponsors. If you buy a gold sponsor your package you get X number of tickets for free and then they can register their staff that way. Which means that we get them as attendees so we can get them into our reports and sort of order lunch for them and so on as for all other attendees. We've also played around with things like sponsor specific discount codes so a sponsor can get like a 10% discount to spread on their side so that we see which sponsors are actually promoting the conference. And yeah we can generally track things by providing discounts which might feel creepy but it's a good way to do it and people get to the conference slightly cheaper. Then comes the other side of thing the refunding. So you can try and configure this in different ways in the PGEU system so percentages or how close to the event you can get and things like that. At Frost North we have the policy to refund 100% until our cost is incurred which is usually when we have to confirm the number of people having lunch. Which means technically we do pick up the transfer costs when you pay by cards and stuff like that but it's yeah. We want to be fair and we have sponsors to cover for that so our ticket prices are usually our food cost to be honest. But yes this is another aspect that's then automated which is kind of nice as an organiser because it's not only additional work but it could also be an awkward discussion when you want to do this. So as you know you set up very strict discount rules so it's very clear to everyone what if statements will be applied and you just click a button and you get your money back. Nice and easy. Then you have a set of reports you can see the number of confirmed visitors per day so you can plan with your venue you can see how many visitors were not checked in during different parts of the event and you can also here do more generic database export in various formats. You can even have a badge template and stuff in the system but we use this to just export a big comma separated value spreadsheet that we use for our custom badges but you can export first name, last name, company name and ticket type for everyone who has paid their invoices and that are supposed to attend a certain part of the event for instance. So quite powerful and again it's you can automate more than we do at Foss North but it's also a very powerful enough tool for us to get away with our sort of botched together hackish approach to things. We will look at payments more in the accounting section but it's interesting to know how it works. So every visitor gets an invoice with a due date so I think we have a due date for like 48 hours or something for tickets and then we can track if they're paid or unpaid in the various reports that we get and these things can of course be paid manually which is something that we do offer for people where we don't really promote it because it's manual work for us. PGEU supports a lot of or PGEU system supports many payment providers like Adien, Stripe, PayPal, Brainty, Trustly, TransferWire and TransferWise and they support card payments and other payments with these different providers. 
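To make the pricing and promotion codes above concrete, here is a small worked example. The 25% figure is simply the Swedish standard VAT rate, and applying the percentage discount before VAT is an assumption about how such a system would compute it, not a statement about the PGEU code.

```python
"""Worked example: ticket prices are entered excluding VAT; promotion codes can
be a fixed amount or a percentage (percentage shown here)."""

def ticket_total(price_ex_vat: float, vat_rate: float = 0.25,
                 percent_discount: float = 0.0) -> float:
    """Apply a percentage discount to the net price, then add VAT."""
    net = price_ex_vat * (1 - percent_discount)
    return round(net * (1 + vat_rate), 2)

print(ticket_total(500))                          # regular ticket: 625.0 SEK incl. VAT
print(ticket_total(200))                          # student ticket: 250.0 SEK incl. VAT
print(ticket_total(500, percent_discount=0.10))   # 10% sponsor promo code: 562.5 SEK
```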
For Foss North we use Stripe for our card payments basically and we have a Swedish bank with bankierut as it's called for the bank transfers. But it's quite convenient when you have one of these payment providers integrated because it means that the whole invoice is paid not paid, expired and so on is just handled by the system. It just works. You don't have to do anything, you just get a report and money on the bank so that you can run your conference. This brings us on to the volunteering part. This is for the organizing of the conference and this is something that I've never used much because Foss North made the decision early on to rely on sponsors and then our venue. One of the things that eats time for conference organizers is people management, handling volunteers and things like that. So we decided to go down the path of actually having sponsors that cover those costs and we generally have very few volunteers, like three or four people in total and we do that manually or verbally. But you can configure different slots for the volunteers, you can assign them to it, you can create volunteer schedules, you can also specify how many people are needed and how many people are allowed per slot and then let the volunteers self-register. But in general, you can make a group of people handle themselves and you can generate reports and you can fill the gaps and things like that. It takes away a lot of the hassle of actually managing the volunteers which otherwise takes quite a bit of time. Then we go on to the sponsor part. And again, this is something that for historical reasons Foss North does not really use. We deal with the sponsors manually and individually. The PGEU system, it allows you to specify standard sponsorship contracts and for the sponsors to buy these slots themselves, which is really nice. And for a conference with a lot of where you attract sponsors easily, that's of course the way to go for Foss North, we sadly have to call our sponsors and remind them which means that we can't really force them to do this themselves. But you can also provide packages for the sponsors, you can have a lanyard sponsor, lunch sponsor, things like this. And then you get a number of reports around these things that of great help use to make sure that everyone's where they're supposed to be and stuff like that. This brings us on to the additional parts, which is where I was sold on the system to be honest. News is one thing, so you can have global news and you can also have news per event, which is kind of nice. You can have a news stream for Foss North 2020, but that's also aggregated into the front page and onto the front page we can add sort of non-conference specific, more organization specific news. You get the RSS flows and stuff out of that as well. So it's kind of convenient. But the big one, the big sale to me is accounting and invoicing, which kind of sucks otherwise when running conferences, that's definitely part of the boring work. It lets you run the bookkeeping of your conference organizations inside the PGE system. And this is of course sort of a take it or leave it thing. So if you want to do the bookkeeping in the system, then you need to commit to that and do the entire bookkeeping inside the system. Otherwise you end up with a mess. And I don't think that's what the countants like. The system uses the cash method. I'm not sure if that's the correct English term, but it means that the invoices appear in the books once they're paid. So you don't have the hold. 
Somebody owes me 200 kroners and now they didn't pay because they didn't pay the invoice within 24 hours. So now I have to immaculate the invoice. This is a straightforward method. And then if you happen to sort of have your fiscal year end in the middle of a sale cycle for a conference, which I highly recommend you not to do, you have the reports. You can get the number of paid and unpaid or sort of outstanding invoices, which is technically a depth to you and things like that. So you can do your full economical reporting in a good way. And the really attractive thing here is that since the invoices are generated by the system, the invoices are also book kept or accounted for in the bookkeeping automatically. So basically you start with an account structure that lets you set up it for your local rule set. So as you can see here, First North uses a boss, Conto Plan, as it's called here in Sweden, which is, I don't know which years standard we used, probably 2019 since that's when we started using the system. It allows you to set up all the accounts and account groups and stuff like that so you can get all your sums right for the tax declaration basically and all your economical reporting. Then you can create view and track invoices in the system, which is nice because as I said First North does the invoicing manually, but we still generate the invoices within the system. If you go for the whole Visitors and Sponsors self-register stuff, all those invoices are automatically created and put into your bookkeeping. You don't have to lift a finger to get it done, which is very handy, especially for the visitors because that's a few hundred invoices that you either manually have to bookkeep. And then you have the whole paid not paid where if you use a payment provider that's tightly integrated like a card service, you get this checked automatically, but you can also manually mark them and sort of refer to, so when we get a bank transfer, we can refer to the ID number that we get from our bank when registering that transfer or registering the invoice as paid, we can refer back to the actual transfers, we get the traceability. You can also have this semi-automatic flow, which we currently haven't used, where you do an export from your bank and use or consume the entire export into the PGU system and it then uses that to automatically update the bookkeeping and sometimes ask you to just complement or complete some of the line items in the bookkeeping for you. For the invoices, you provide a custom invoicing template, so we have our logo there to look fancy, our own VAT numbers, stuff like that, and then it generates a standard PDF and as you can see probably on the thumbnail, there's also a little QR code, you can find invoices in our system and if you click to there, you can pay by card and stuff like that or otherwise you have all the bank details, you can do bank transfer. It automatically sends an email to whoever is supposed to pay the invoice and things like this, so all that is handled seamlessly, very, very nice. And then at the end of the day, you get a transaction list, instant automatic bookkeeping, awesome, where you basically get the whole debit credit set up, all the transfer between different accounts and so on and you get it for years, you can close your years and do your fiscal reports and yeah, everything is nice and easy. 
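For readers unfamiliar with the cash method described here, this is roughly the double-entry record that one paid 625 SEK ticket would produce once the payment lands. The BAS account numbers (1930 bank, 3010 sales, 2611 output VAT 25%) and the exact split are illustrative, not necessarily what the system books.

```python
"""Hedged sketch of the automatic booking of a single paid ticket invoice."""
from decimal import Decimal

def book_paid_ticket(gross: Decimal, vat_rate: Decimal = Decimal("0.25")):
    net = (gross / (1 + vat_rate)).quantize(Decimal("0.01"))
    vat = gross - net
    return [
        ("1930", "Bank account",        gross,          Decimal("0")),  # debit
        ("3010", "Conference tickets",  Decimal("0"),   net),           # credit
        ("2611", "Output VAT 25%",      Decimal("0"),   vat),           # credit
    ]

for account, name, debit, credit in book_paid_ticket(Decimal("625.00")):
    print(f"{account} {name:<20} D {debit:>8} C {credit:>8}")
```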
It's a lot easier than when we used GNU Cache and manually registered 200 people per year plus all the sponsors and all our expenses, which took a decent amount of hours to get right. Then the FOSS North setup, so as I said, FOSS North came out of a bit of a hack, so we primarily use the system for corporate papers and the talk voters, we use this for the visitors and all the invoicing to visitors and also the invoicing of everything else like sponsors and stuff like that. But we don't use automation around sponsors and volunteers and those parts. There's also the whole membership section that we don't use either where you can have members in your organization that pay an annual fee and stuff like that, which is something we don't use. Again this is what the page looks like, you can see that it's a template and we have a little for loop for events, we only have one event, we have a little for loop for news, we have two news, way too old to be honest. The rest of this is sort of a part of the template, so it's the things that we put up there, where we point to past events, where we point to our meetups that are completely independent. So FOSS GPG is run by the FOSS North crew, while FOSS Stockholm is run by Daniel Stiemburg and Klaus Jakobsson and Co up there, but we still consider them close enough and friendly enough to link to them, some Twitter integration and stuff and also in the menu at the top you can reach the pod which is generated by a completely different system but integrated into the same website. You also have different account views, so this is your account tab for a speaker where you can see that I haven't made any subscription, but you can see the call for paper, links to create new subscription and you can remove submissions and stuff and you can update your speaker profile. Again, great that the speakers can use Go-In and do this by themselves, removes a lot of like tedious works where you always sort of miss one email and things aren't up to date. There are some future steps for FOSS North, so of course we want to upgrade to the latest version which is something that we're going to do after our 2022 event. There's been a major version update for Django for instance, I want to follow along. And then we really want to set up our own ID server with OAuth to integrate it so that we can actually provide account registration based on email and not only rely on other corporate services providing it. And of course we want to get through COVID-19 and get back to physical events and meet all of you in Gothenburg, in Brussels and wherever because we miss all of you. So a very quick summary then. You have FOSS North at fossnorth.se. The PG EU system is hosted at Github. There's a lot of stuff under PG EU with their different templates and so on that they use for their events. It's actually a great resource to see how to set things up. So big kudos to them for sharing. Under the list, postgreskill.org, you also have the PG EU system mailing list for users which I recommend you to join if you're curious about the project, want to follow the development and also share insights and learnings with other users of the system. And that really brings us to the end of this slot. So a big thank you for attending. And this moves us over to the Q&A section where myself and Magnus Agander will try to answer all your questions. Thank you for listening. Well thanks for that talk. 
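The "little for loop for events, little for loop for news" described above corresponds to ordinary Django template loops. The sketch below uses made-up context variable names rather than the system's actual ones; it only illustrates how the dynamic parts are woven into an otherwise static page.

```jinja
{# Hedged sketch of the front-page loops; variable names are assumptions. #}
<section id="events">
  {% for conference in conferences %}
    <h2><a href="{{ conference.url }}">{{ conference.name }}</a></h2>
    <p>{{ conference.startdate }} to {{ conference.enddate }}, {{ conference.location }}</p>
  {% endfor %}
</section>

<section id="news">
  {% for item in news %}
    <article>
      <h3>{{ item.title }}</h3>
      <time>{{ item.datetime }}</time>
      <div>{{ item.summary }}</div>
    </article>
  {% endfor %}
</section>
```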
Yeah, if anyone has any questions, please post them in the chat and upvote the questions that you like by reacting with a thumbs up. And we will start diving into those questions. The first one is does the PG EU system currently offer any mobile app integration? I've been reaching out to FOSS North before. Yeah, so maybe I can start it. We don't use it from FOSS North. I'm terribly sorry, but I think I put you in the spam folder because there's lots of offers of different support services around conferences. But I know there's been something for check in Magnus, but in general, yeah, I guess you can comment it further. Yeah, I think that's about it. I have a feeling that I may also have put offers from the same person as well into the spam folder because for every year we get 10 to 15 different companies trying to sell us a conference management system, which we already have. But no, we don't have a mobile app integration. Our conference are at different levels of success targeted having a mobile friendly website and focus around that. There is a mobile app for the check in process. That's correct, Johan. There is actually, there is an Android app for the check in process. There is sadly no iOS app because we don't have any iOS developers. But it's basically the only reason that we have it is that it's faster to do QR code scanning from a native app than from a web app. But we do also have a web app that does that for those that don't, that can't use the Android app or don't want to use the Android app. But for the main conference system, there is no mobile app, just web. Cool. Then the next question or statement from Null Value is, I'm amused that two things that I'm most interested in scheduling and volunteer management. He mentioned FOSnorth doesn't use. Yeah, and then I'm looking at Magnus. Yeah, it is true. Obviously, we do use those. And the volunteers, it has helped us a lot in keeping track of the volunteer schedule. It's sort of a two-step process where you can assign different slots so we can, for example, for the registration desk in the morning, we want between four and seven people to be available. And then during the rest of the conference, we only need one, for example, like things like that. And then we create a team of volunteers and just email them and say, hey, please sign up for something that works for you. Because, for example, in our events, we always have a room host in every room. Sits at the front helps the speaker with the timing, liaises if there are technical problems and all sorts of things like that. And this way, our volunteers get to choose to be the room host of a session that they actually are interested in seeing as well, like they get to being in control of their own schedule, can't we? And then there is a dual step that if the volunteer signs up for something and the organizer has to confirm it, or if the organizer goes and adds the volunteer to a slot, the volunteer has to confirm it so that everybody knows it's happening. You get an iCal export so you can get your own personal volunteer schedule up on your calendar in your phone and stuff like that. There's another part that we don't use either that I know has come later, and that's the membership management, and I'm sure if you want to mention a few words around that as well. I mean, technically, I guess that's not part of the conference system as you mentioned as well. There is actually no connection between them at all except they're running in the same basic system. 
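The personal volunteer schedule export mentioned above is a plain iCalendar feed. As a hedged illustration only, generating one such shift entry with the Python icalendar package could look like this; the event details are invented and this is not the system's own export code.

```python
"""Illustrative sketch: one volunteer shift exported as an iCalendar file."""
from datetime import datetime
from icalendar import Calendar, Event

cal = Calendar()
cal.add("prodid", "-//Example conference//volunteer schedule//EN")
cal.add("version", "2.0")

shift = Event()
shift.add("summary", "Room host: main track, morning session")
shift.add("dtstart", datetime(2023, 4, 24, 9, 0))
shift.add("dtend", datetime(2023, 4, 24, 12, 0))
shift.add("location", "Room A")
cal.add_component(shift)

with open("volunteer-schedule.ics", "wb") as f:
    f.write(cal.to_ical())
```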
We have a system where — in Postgres Europe, who developed the system from the beginning — we have members of our organization and then we run conferences. Today they're completely separate. But the membership system has things like elections of the organization's officials, and you have a membership, you pay a fee, you get the same integrated invoice management, which is actually, I think, the original reason we added it: to get the integrated invoice management for membership fees. We've talked about, but never actually got around to doing, things like automatic discounts at events for members. It would be trivial to do based on how the system is; we just haven't done it yet. Cool. The next question is: did you have a look at or compare with projects like pretalx or pretix, or has your system just been around for much longer? Well, Johan can speak for FOSS North; I can speak for when we started building it. At the time we were actually looking at Pentabarf, and we came to the conclusion that for the needs that we had at the time, it would be less work to write our own than to set up Pentabarf. But this goes back far in time, when writing our own was much, much smaller than what we have now. It was basically registration back then. Everything else was done the way that Johan mentioned he used to do it. There was a Google form for the call for papers — I think we even just had an "email us" for the call for papers kind of thing. But that's back when our event was 50 people. As we've gone on, we've added more; now we're up to 600, 700 people at the events normally, and obviously that would not have worked anymore. So at the time we looked at that, we looked at something else. I've not actually heard of these other systems, so I'm guessing they didn't exist at that time. They might have existed, Johan, when you started looking at this for FOSS North. The thing is, for FOSS North, I mean, we had Eventbrite and Google Forms basically, and then spreadsheets and lots of manual labor. And then you came around, we had some beers, and you said: if you have a VPS, I can set it up for you. So we didn't review that many systems, because nobody else came with an install service and this informal SLA that I bug you and you fix things. So no, I haven't looked around that broadly. OK, then the next question is: is there some sort of delegated authority? For example, FOSDEM has devrooms who should be able to schedule their own room, but not anything outside that. So, yes and no. The way that the system kind of works — and we actually use this level of delegation for Postgres Europe — is that we have one installation that is for Postgres Europe, and in the installation you have something called a conference series. One of our conference series is, for example, Nordic PGDay, which I'm part of running. Another one is the FOSDEM PGDay, which normally runs on the Friday before FOSDEM — obviously not this year; we didn't think it was worth doing our own virtual event, we're just focusing on our devroom tomorrow. So for each of these conference series, you have the ability to delegate a certain set of permissions, and then for every individual conference you can delegate permissions. But within the conference, you're an admin or you're not an admin, you're staff or you're not staff. So for a situation similar to FOSDEM, what you could do is just install an entire instance for a year of FOSDEM and make each devroom their own conference. 
Or at least a conference series for each room or something. It would not be a perfect match for that, but it could be kind of worked around like that. Next question: what is your current number one point of friction or annoyance with the system, if any? It's more about a mismatch with how we run things, I think. So, referring back to Dev's talk before: we always have what we call two or three seed speakers that we try to announce before the call for papers, at least for the physical events. With those we sort of agree on a topic and a bio and all of that for the website early on, and then you have to go back and ask them to register in the system, because there's no way to force a user into the system, by design. But it's a fairly minor thing: please register a placeholder there, I can help you fill out the talk details, blah, blah, blah. It's one email, more or less. Another thing that would be nice is to have, let's say, dynamic forms. So when it comes to the call for papers, maybe adding options to that call for papers that are specific to a specific conference instance, without having to hack the actual Python code. But yeah, these are minor things; it's more that we run the conference slightly differently. Yeah. I'll point out, just to cut in there on the second one: I believe we actually had a Google Summer of Code student work on the dynamic forms part. Not necessarily as a way to extend the call for papers, but it could probably be used for that — more as a generic infrastructure feature throughout the system. I don't know exactly how far along that project is. It was last summer, so I guess the GSoC part of it is done; I don't know how close it is to being ready to merge in. Since you mentioned that, I should also mention that we run an older version. So the idea is to do an upgrade after the call for papers for FOSS North this year is done, so I think February 13 or something like that. So there are probably some rough edges that have been taken care of in the meantime. There are some, and I'm sure there are a bunch of them still there, because the system has been purpose-built for the events that we run. So obviously anything that's run slightly differently will at least potentially have a set of frictions there. I mean, we want to make it useful for everyone, but given that our perspective comes from our events, we don't know about these until after the fact. Cool. If you've got any more questions, please feel free to ask them in the chat. I don't see any more in the list of questions at the moment. But yeah, if you do have more, please feel free to ask. Is there anything that you two would like to add or comment on the talk so far? Johan, go ahead, you gave it. Yeah, I sort of put the talk together very, very late. But no, the automation of the system, it saves time. So even if it's not a 100% match, it helps with so many things that are tedious. Just the function that you can email all the speakers standardized information, or contact all your sponsors and so on, without having to manage that outside the system, is worth quite a lot. And then the whole bookkeeping system is brilliant. I mean, that's my pain point personally; that's what I really hate and that's where I procrastinate. So having that just working out of the box is a real killer feature. I notice how you bring up the sponsorship system, which you're not even using. I mean, you're clearly using the send-emails-to-sponsors part, but not the rest of it. 
I mean, you have a benefit with PGConf in that you have sponsors coming running to buy your sponsorship slots. So I think you offer standard contracts and they sign up themselves. I unfortunately have to call my sponsors and sort of make them buy it, which means that there's slightly more wheeling and dealing involved, but we still invoice them and register them and so on. Yeah, I mean, there are two parts to it. One of them is the sign-up part, which obviously does not work in that case. In our case, for some of our smaller events, we do end up proactively contacting sponsors, but then we just refer them to the existing standard contracts to deal with it. But there are things that you could probably still benefit from after they've signed up and once they're in the system, like you mentioned, dealing with discount codes and stuff — and again, you can get all of that fully automated. You can say a sponsor at this level gets five free tickets, and it'll just issue them five free tickets, so you don't have to remember to do it, things like that. And then from our perspective, another thing that I came to think about — and Dev also mentioned it — when it comes to accounting and economics in general: we use this partly as a replacement for Eventbrite, so people can actually pay for the tickets and so on, and it handles discount codes and stuff. But that requires you to have a payment provider. So we're using Stripe for FOSS North, for instance, and that probably requires your conference to have a legal entity that can sign those contracts. I mean, the way around that for everyone, right, is PayPal these days. You can still do PayPal; PayPal doesn't require that. But a credit card processor will require there to be an entity, yes. Then, signing up for PayPal — and that's a tip — make sure not to keep all your money in there, because after a certain number of transactions they will ask for an actual verification of your account, where you need to reply with a letterhead and stuff. Which could be a pain in the ass if it comes really close to your conference date, because it takes a week or so to just get cleared in that process. But yeah, apart from that it's plain sailing. Yeah, another experience in general has been — and I'm sure that's true for anyone using these services, right — if you do have an entity, getting onto one of these sort of native credit card providers, whether it's Stripe or Adyen or Braintree, gives you a much better end user experience. Cool. Well, thank you. I don't see any more questions in the devroom channel at the moment. If anyone has more questions, you're welcome to join the conference room after this; the link will be posted in the devroom. So feel free to join us afterwards. I'd like to thank you again for making the time and giving this talk, and hopefully we'll see you in person next year. Likewise. And thank you, Johan, for giving an interesting talk about software that I wrote — it's interesting to hear what other people say about it. So thank you for that, and for letting me join in the Q&A and actually being able to answer the questions. Cool, cheers. Cheers, thank you. Time for the beer event, right? Exactly. Nice.
The pgeu-system is an integrated system for managing conferences. Originally developed for PostgreSQL Europe's own conferences, the system has grown into a generic system for managing events, visitors, speakers, sponsors, and volunteers. It integrates functions such as call-for-papers, managing schedules, invoicing, accounting, and more. In this talk we will do a whirlwind tour of the tool, from the context of the foss-north conference.
10.5446/57005 (DOI)
Hi everyone, welcome. So in this session I'll talk a bit about streaming and editing conference videos with OBS, Jitsi and Kdenlive. So welcome. Quick agenda: I'll give some introduction and background, talk a bit about the pre-COVID setup that I'm coming from and what happened in 2020, which was slightly painful, and the virtual setup that we're running now, and then a quick summary followed by the Q&A. So hi, I'm Johan. I do some consulting through my company CodeRise, and I'm also working part-time in a startup called Epidoto doing legal tech. But the reason for being here is that I arrange the FOSS North conference. There's a conference and a pod; the pod has been asleep during the fall but will be reactivated. My background is Qt and Linux, I've done lots of automotive work and licensing, and I find startups — and startups with open source and free software — very interesting. FOSS North, real quick: this talk is really about the experiences gathered through FOSS North. Well, the original pitch was a more moderate FOSDEM. We started in 2016 and have been going annually since then, in Gothenburg every spring, in the April–May-ish time frame, when the weather permits people to actually move about up here in the north. And of course you should visit us. We've unfortunately gone virtual for 2022, but next year you're more than welcome to come up and see a lovely city and a great FOSS conference. So let's talk a bit about the pre-COVID setup, which gives some background to where we're coming from. We ran from 2016 to 2019, and will continue from 2023, COVID permitting, as a physical event. This means that you record with multiple cameras, recording audio separately through various microphones, but also through the PA system. For 2020 we actually got the equipment to do separate screen recordings of the slides, so that we could combine a camera and the screen recording. And this is one of the reasons for landing on Kdenlive, so to speak, in the video flow we had there. The very first reason for picking Kdenlive was actually that one of the cameras we used had a fisheye lens, and in Kdenlive it was quite easy to de-fisheye it, or sort of flatten it. And of course I'm a bit of a Qt/KDE person, so it was one of the first tools that we tried, and it suited our needs, so hence the choice. There's another aspect to the physical event, and that's that we had multiple audio tracks. So of course we had the audio from the video recordings, which is usually quite crappy. Then in later years we started getting these Zoom H5 recorders, which allowed us to connect directly to the PA system and to the microphone on the speaker. And again, here Kdenlive is really nice: you take the video recording's audio and set that as your audio reference, because you know that it is in sync with the video, and then you can align the audio from the microphones with that audio. That usually lines up quite nicely, and then you don't have to spend ages trying to lip-sync your video or end up with videos where the audio is out of sync. We also do some other basic video work in Kdenlive, like the intro and exit things, which are more like slideshows that we put into each video: having all our sponsors go by, and also a little intro slide per talk. And then at the end we have a YouTube-friendly outro picture that stays on the screen for 20 seconds, where we can put subscription info and other videos and playlists and whatnot — to feed the algorithm, so to speak. 
All of this was nice and easy. And then 2020 happened, and it happened just before the event as well. So we really had to scramble and basically abandon any principles and get something that works ready in the short term. It meant relying on the tools that we used at work and not necessarily going for the open source alternatives; we basically had a couple of weeks to get it up. Our goal was to live stream, but we also wanted to publish the recordings, one video per talk, afterwards. We wanted the talks to be live — this is a pre-recorded talk; giving a live talk to a screen is also really awkward, but I still think that it gives it a little bit of edge. And we wanted to be able to do live Q&A as well. So we sort of ended up with live streaming through YouTube. Then we put the recordings both on YouTube and on a PeerTube instance. For the live talks we had, we fell back to using Zoom, because that's something where we had experience and a working setup, and for Q&A we used Slido, again because we had a working setup. We were able to get free accounts for these things, so I'm very thankful for them helping us get up and running, but this is not open source software. And then, as sort of the spider in the web, the broadcasting hub, we used OBS, which is the black circle in the lower right corner. So yeah, we saved recordings from YouTube, but we also saved the cloud-side Zoom recordings, which was good because it allowed us to fix some audio hiccups when the speaker wasn't unmuted at the right point in time. For instance, we could take the audio out of the Zoom recording, which was sort of our raw input from the speaker, put that over the YouTube recording, and actually save that part. Then we had to cut things into pieces, and we also did some editing in the Q&A section, which is something that happens in the live events as well — maybe you get repeat questions or things like that. So that's usually the part where you need to fix the video. And then we of course prepend and append the intro and exit videos. The general flow then is to upload to YouTube, get all the metadata right there, like links back to the talk slides and stuff like that, and then do an import on the PeerTube instance, which works flawlessly. If I recall correctly, all that we have to do is to say that the recording is in English on PeerTube and then it works. But yeah, this is not very open; we realize it. We had to scramble, and I still think to this day that it was better to get a conference going at all than to stumble on not being open. But we do have a virtual setup now that we're more comfortable with, and it's roughly based around the same ideas. So, how the show is run: each speaker joins a Jitsi meeting. That's all they see. They go in and do their thing in Jitsi. There's a question and answer facilitator monitoring the YouTube comments. This can be the broadcaster, but that's a bit of a handful, so we're usually at least two people in each session. And then the broadcasting person, the person running OBS, does the whole juggling of audio and video and what to show to YouTube and the attendees. This whole setup with OBS and so on also gives us the opportunity to interact with the speaker behind the scenes. So we can talk post-show and pre-show and make sure that screen sharing and everything works, and then show it to the world once we know that everything's up and running. We have some OBS scenes. So this is an attempt at summarizing the different types of scenes that we have. 
But basically, the first one is a set of still pictures. We have all the session times — stream starts at 1400, 1500, 9 am, whatnot — so that we can get the stream going before the session. We also have a "we'll be right back, something's broken" one, which was something we learned the hard way the very first time we did this. Sometimes you have a technical hiccup, and it's good to be able to show everyone watching live that this is what's happening. This is of course something that's edited out once you do the post-show videos. And then we also have a plain logo to put there like an hour before everything starts and stuff like that. Then we have a number of pre-recorded videos. We have at least an intro and an exit for each session, with sort of pre-recorded messages from sponsors and stuff like that. But also some of the speakers — and we see a shift here. We've been virtual now for two years, and more and more speakers want to do pre-recorded talks. We still want all the speakers to do a live Q&A with voice. And yeah, this is the sad thing about having to do virtual events, really, that we miss the feeling of live. And then the last part is really the browser plugin. So we have OBS join a Jitsi room and record it. And then we do some overlays. Since we used Slido this year — the year that I've taken the screenshot from — we have a little overlay in the corner saying where to ask questions on Slido, things like that. But we're basically recording whatever the speaker shows over in Jitsi through the browser plugin in OBS, which also means that you can join as a host in a browser and then, on the same machine, join as the recorder. Which is quite nice. So it's not a screen recording or anything; you don't risk all your notification sounds and stuff from your desktop interfering with the recording. It's the pure Jitsi recording that you get. I can also recommend the studio mode, which is a great thing in OBS. You can actually get a preview of what you're about to show, and then do the transition to it. This is great in a live scenario, where you can prepare a bit and just not be surprised about what you actually happen to put on the screen. In Jitsi, you have the broadcaster, the question and answer facilitator and the speakers. So it's at most three people in the room, maybe four if a speaker decides to stick around, and you have OBS. So it's quite a light server load. And it also means that the speaker only needs to worry about the Jitsi meeting, which is a web thing, so all you need is a fairly modern browser and it just works, super easy. There's some juggling of audio. I as a broadcaster can mute and unmute myself in Jitsi, I can also mute and unmute myself in OBS, and I can mute and unmute Jitsi in OBS. And this really gives me all the combinations: the speaker can speak all alone, I can speak to the speaker alone, I can speak to the audience alone, everyone in the Jitsi room can speak privately, and everyone in the Jitsi room can speak to the audience. You can juggle this around so that you can talk behind the scenes or in front of the audience; I can speak to the audience without the speaker being heard, or the other way around, things like that. Quite handy, but also, yeah, juggling is the right word. It can get confusing, so you'd better not be a stressed personality when handling this. 
I've made mistakes and been able to save it by recording additional video streams. But yeah, it's a bit of a juggle. Then, in this setup, everything is saved by YouTube, so we get the full livestream — a multi-gigabyte, like 10-hour-per-day thing — saved. We also save it locally on the broadcast machine, so that you don't rely on YouTube for that. We decided not to use the Jitsi server-side recording because it was a bit of a hack — maybe hack is the wrong word, but it was a complex setup and we decided that we can do without it. So we rely on having double copies, but we don't do a server-side recording of the Jitsi room, which in some respects might have been nice if you want to, well, as I said about Zoom, save the audio, for instance. There are a number of possible improvements. For me, the OBS package in Debian does not include the browser module. There's a bug for it; it comes down to the whole Chromium thing not being packaged for Debian. One of the shortcomings of using Jitsi is that it's hard to have a side channel to the speaker. So you can't really communicate timing and issues without intervening in the actual talk. If we're having technical issues, you put up the "we'll be right back" scene, and then you mute everything and you speak directly to the speaker, but you end up interrupting the speaker, and it's hard to tell them that it's time to stop. We've tried with the chat, but it's not obvious to the speaker that they need to look at the chat in Jitsi. We would also like to try live streaming via PeerTube, something that's in the works. And then we're still using a service called Auphonic for audio processing that does magic to the audio. It can correct loudness, but it also has machine-learning-based filters for noise and hum reduction, which is quite nice. There we haven't found an open alternative that's as powerful as Auphonic, so it's something that we do use. As for alternatives, I bumped into BigBlueButton, which is nicer to the speaker: you see your slides, you see the Q&A comments, you're in the same room as the audience. So it's a single tool for all of the purposes instead of this hack that we're using, but it's a bit complex to set up. Last time we checked it had very specific versioning requirements on the servers and so on, so it's not something that we've spent time on. Very quick summary. I want to share the resources: I'm hosting a little thing, virtual-conf-resources, which is basically a markdown collection where I try to describe how we run FOSS North and various resources and various projects. Feel free to contribute; pull requests are more than welcome. Here are some links to OBS, Jitsi, PeerTube in general, but also conf.tube, which is the instance that we use — it's a PeerTube instance focusing on conference recordings — and Kdenlive, of course, and then FOSS North. But that's the whirlwind tour of our video setup, and I hope I made it on time. Thank you, and please join me in the Q&A. All right. Thank you for that talk. We do have a few questions; I'll just go over them. The first one is asking if that plug-in is doing some getDisplayMedia and then sharing that tab, or how does that work? So I think that was asked about the OBS browser plug-in. From what I understand, it's a Chromium instance running inside OBS, which is kind of nice. You don't need a browser apart from OBS, and it lives within OBS. So you can actually juggle around the windows; you don't have to be concerned about obstructing the view or anything. 
It's a completely separate thing. Right. Okay. A second question was: do you have any recommended video and audio configurations? It's kind of hard to answer. At the moment, we only do one single video stream, and then we have multiple audio sources, so we can mix them together and then pick the best one. It's not that complex. It would be more complex if we had the slides separately, but we run it fairly straightforward. There's also this repository where I've documented the setup, and if you want the exact set of scenes and so on, I can of course add that to the repository and make it available to the broader audience. Right. Okay. And then the final question we received is: how would you provide subtitles? Is that something you've thought of? I mean, live we don't. I guess you could possibly do that with some sort of recognition engine. I think that Auphonic actually allows you to do that with some sort of voice recognition technology, but we haven't tried it. So, just thinking out loud: we upload to YouTube because then we can import into PeerTube — the other way around doesn't work. So if we were to add the subtitles on YouTube, I hope they would survive that transition. That's probably what I'd try, but I haven't tried it at all. Right. Okay. I think those were all the questions. Let's maybe give it a bit of time; there might be some other people who have questions still. And I'll dig out that link in the meantime — I saw there was a question about the repository. It was in the slides. I'll make sure to add that link to the chat so that people can find it right away. Yes. And it's intended to be sort of a generic repository, so feel free to add your details in there. I do see some clarification: Ke was saying that he was basically wondering whether the audio and video settings would be similar to FOSDEM's, in terms of frames per second and container format and that kind of thing. We do 1080p at 30 FPS, because that's what my internet can survive, but also what YouTube likes to consume. But there hasn't been much thought about it. I think FOSDEM does 720p at 25. Yeah, exactly. Yeah. And then for audio, FOSDEM also has a camera and a lapel mic, that kind of thing, but I think it's mixed together — or it's basically on two separate channels on the video stream. So in that respect, it's slightly different anyway. Yeah. But that's actually one of the killer features in Kdenlive, I'd say: not having to do audio syncing. We do what we call a pod, but we also sort of release it on YouTube, where you can watch us talking and not only hear the audio, and there it's quite helpful with this audio syncing, because there we usually have like three or four recordings and you don't have to spend time getting that perfect. Right. Let's see what else I can see here. Are there any other interesting questions? I think we handled everything. Oh, no, here's another question: does Auphonic work on live streams also, or is it used for post-processing? It's only post-processing. I've tried playing with Audacity and FFmpeg and so on and various filters, but they have some sort of technology that applies the filters at certain points in time. So I can do loudness, I can do noise reduction and so on, but they are able to do it in a way that it doesn't sound like you're in a can. So that's the only thing I haven't been able to replace, so to speak, with open technology. Right. Okay. I think that's everything. Oh, sorry. 
I just wanted to comment that the whole "how do we do it for the virtual event" is always a hack, because the intention is always to go back to live. But you don't want to spend too much time on it either; it's a bit of a balance, I guess, yeah. Right. Thanks for the Q&A and for the talk, I think. I guess if there are no further questions, we can leave it at this. Oh, we do see somebody asking about video.ninja. Never heard of it. Is that something you've heard of? Nope. Never tried it, but it does sound interesting. When it comes to tooling, one of the things that I'm missing is the ability to record webcams and screen captures and audio as separate channels over the internet — so when doing a virtual talk or a podcast, to not get the mixed audio but to get separate streams. Yeah, that can sometimes be useful. But like you said, it is complicated. It is complicated, and you get the timing issues, and you also add bandwidth. And then you need to force everyone joining your session to use your tooling, which is... Right, yeah. Not ideal. I can see Carol typing; I guess we let her finish. Yeah, I was just about to say the same. Oh, she's commenting on the video.ninja thing. Yeah, yeah. Okay. Right. I mean, we do still have several minutes left, about five minutes. So I would suggest that we maybe stick around a bit; if there's nothing more, then we can leave it at this. And otherwise, if there's an interesting question, we can still answer it and it will be recorded. Right? Yeah. Nice. The video.ninja thing actually solves the separate recordings. So then I know what to do tomorrow; I'll definitely check that out. Very nice, thanks for the pointer. I can also plug the next talk I'm giving, the pgeu-system talk that's later today, in an hour and a half. That's about the organization side: everything not related to recording and video streaming, but more the administration. It's also important, right? Yeah, it's also a big hassle. And there we've spent more — or rather, we've picked up more of the automation from the Postgres people, because that's something that's shared between live and virtual events. Right. Yeah, that's for sure. Yeah. Okay. I think that's it for today; let's call it. Thank you for the talk and for the Q&A, and then we'll see you again later. All right. All right. Thank you very much. Cheers. Bye. Bye. Bye.
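As a rough illustration of the Q&A discussion above about video settings and the loudness-correction part of what Auphonic does, the sketch below shows how ffmpeg's loudnorm filter could normalize loudness while encoding a talk to the 1080p/30fps H.264/AAC format mentioned for YouTube. This is a minimal sketch under assumed file names; it is not the foss-north pipeline, and it does not cover Auphonic's machine-learning noise and hum reduction.

```sh
# Hedged sketch: one-pass EBU R128 loudness normalization plus a 1080p30
# H.264/AAC encode for upload. "talk-raw.mkv" / "talk-final.mp4" are
# placeholder file names, not anything used by foss-north.
ffmpeg -i talk-raw.mkv \
  -vf "scale=1920:1080,fps=30" \
  -af "loudnorm=I=-16:TP=-1.5:LRA=11" \
  -c:v libx264 -preset slow -crf 18 \
  -c:a aac -b:a 192k \
  talk-final.mp4
```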
In light of the pandemic, the foss-north conference has gone virtual since 2020. In this presentation we will discuss our live streaming and video recording setup built around OBS, Jitsi and Kdenlive. The talk will discuss what software we are running and how, some behind-the-scenes info about how we run the live events, as well as how we edit and distribute the recordings post event.
10.5446/57007 (DOI)
Hi everyone, thanks for tuning in for my boot to container presentation, which is an unit from FS that does exactly what it says on the tin. So since I'm not known in this community, my name is Martin Rochala, I am mostly active in the graphics subsystem, where I am mostly known under the nickname MuPyF, or by my premarital name Martin Paris. So I am now a freelancer working at MuPyF TMI and Valve Contractor. So my mission is to create a production ready upstream Linux graphics driver. And what does it mean? It's a lot to unpack. So let's focus on parts of it. So first, graphics drivers. Well, the point is of course to have nice looking games, high FPS, low latency, and I don't know if you look at the size of the Linux kernel driver, but that GPU kernel drivers, they are enormous and the complexity of it is insane. So okay, on this, and we've got this, then we've got production ready, which is more like the user point of view. So from the user point of view, it has to be usable. So basically fit the needs of the user, then it has to be reliable. So every time they try to use it, it works and same with available. Now upstream Linux, well, upstream is where development is happening. So from this point of view, you're going to have the best compatibility for games and GPUs. And also the best performance, but you get the worst reliability, because of course some changes do create regressions that are not caught by users yet. So on the bleeding edge, you might be bleeding a bit. So how do we actually make upstream working? Because there are some contradiction. Upstream Linux being worst reliable, and then we want something reliable. And GPUs are also complex beasts, so it's impossible to test everything. So no, I don't think it has to be a contradiction, because we could use automated testing to help this. And so people might be wondering now why I'm talking about this, that the topic is boot to container, and it's coming. Now I'm explaining the reason why I need it so bad. So automated testing for the graphics subsystem is very, very tricky. So every graphics component needs its own test environment. So there's the kernel, there is the 3D driver, there is the display driver, there is, I'm by display, the windowing driver. There's so many components. So all of them need to be tested, or even the translation later between DirectX and Vulkan. Something that is really important for just running games. So on top of this, the test suites that we have are enormous. So for instance, the one for Vulkan is getting closer to 1 million unit tests. Then the games are even harder to test, because they are designed for users, not for automated testing. And test results need to be stable, reproducible by developers when there is a problem, or not, but I guess what matters is mostly when there is a problem, so that they can just debug it. And developers also need feedback as soon as possible. So when they make a patch, they want to test it, and they don't want to wait a month to get the results. So they need well results in a matter of hours, and the problem is that the test content, if we were to use only a single machine, we would get six hours of runtime. So we need tens of machines. And since they're going to be running in reliable kernels, and GPUs are notoriously very happy to crash your system, if you look at them the wrong way, then you get some very interesting problems for automated testing. So how do we make such a CI system that would be able to deal with all of this? 
Well, I mean, of course there are a lot of issues, but what matters really in the end is creating blocks that have a very, very good interface. The way I would say that a component is good is when you can take it out of the CI system and use it in many other places. Basically, if I had another use case that needed something similar, I would want to use this component rather than having to reinvent my own or a second one. So the point really is that the interfaces need to be so versatile that they just solve the problem nicely. Since this is a bit difficult to explain, let's take an example that is actually closer to the containers devroom. So, case study: creation and deployment of the test environment. How do we generate the test environment? Well, there are two ways. There is the traditional way in the embedded world — generating what is called a rootfs — and then you've got the OCI containers way, which is mostly found in the web world, at least that's my understanding, and for unit testing it's very, very good. A rootfs can be created using Yocto, Buildroot, debos or any other system like this, whereas containers are usually created using Docker, Podman, Buildah or something else. A rootfs is a full disk image, so from this point of view it is self-contained. But that also means that if you want to update it, it is much slower, because you need to send a full image — unless you're using casync, but let's not go there. And it is also not as portable: if you have a rootfs working for a particular machine, moving it to another one is not going to work nicely. It's just like on your desktop machine: if you change your system completely and try to reuse the initramfs that was created for your old machine, it's likely not going to find your root partition. On the container side, the problem is that it requires platform setup. That means you cannot just take a machine, boot it, and boot directly into a container; you need platform setup, like, for instance, the network or the disks. But the benefit is that it is faster to deploy, because the base OS is already cached using the layers, so only the layers that changed need to be downloaded — and hopefully that's a small amount. And then you have high portability, because containers have been designed for this from the get-go: the same container can run everywhere, on the same architecture, of course. Now, if we go back to the concept of interfaces, the rootfs does two things: it is the platform setup and it is a shared test environment for all the test suites. That means that if I had another project or another component that I needed to test, I would probably end up duplicating the code that was there for one component, copy-pasting it for another one and just making the changes there. It's not wonderful. On the container side, the containers provide an isolated test environment for every test suite, and that means they are composable: we can run one and then the other, and it's just as if each ran for the first time after booting — unless, of course, you crashed your kernel or your hardware, but that's a separate thing. Okay, so now the question is, as I was saying before: how do we start a container then? Because the container requires platform initialization. So do we need to make a new rootfs for this? 
Well, as I alluded to at the beginning, no: I've been working on a project called boot2container, which is a small initramfs that you configure using the kernel command line, and it has some nice features. First, it has some network services: it's going to get an IP from a DHCP server, so you get access to the internet, and it will also synchronize the time, so that you're not out of sync with the rest of the world. It also allows you to have a cache drive, so you don't have to re-download the same layers all the time. This cache drive can be auto-selected and auto-formatted. And you can have a swap file too: if you run out of RAM, you can say, well, create a 4 GiB swap file, and it's going to use that. Very simple. And finally, there's support for volumes. Volumes are just like Docker volumes or Podman volumes: they're used to share data between containers. But in our case, we can also provision the volume on startup, or whenever you want, and it is provisioned using an S3-compatible storage — so-called cloud storage: S3, Backblaze, anything like this. This is pretty nice. Then we can have the volumes encrypted using fscrypt, which is nice if you have some jobs that need to run and store some very big files that have to stay private, so that other jobs that run on the same machine later on don't have access to them. But if you have the key to decrypt the folder, then you get access to the files without needing to re-download potentially a terabyte of data. And finally, you can specify an expiration time. So far, the only options we have are that either you keep the volume after the machine stops, or you destroy everything at the end. If it's a temporary job that specifies a volume, you can say: just delete it at the end. Okay. And finally, boot2container is ready for multiple architectures. It is based on u-root, which is written in Go, and on Podman, again written in Go. There are some C programs in there, but they're very tiny and have no dependencies; those I take from Alpine, which has support for a lot of architectures. Most of the rest is written in shell script. So how do we use boot2container? Well, you can use it directly, or netbooted. If you want to use it directly, here is an example using QEMU. You just specify the kernel and the initramfs, so boot2container. Then on the kernel command line we say console=ttyS0, which means just draw on my console, and I put no graphics, so I don't have a separate window starting. Then I set the container parameter, which says: start a container, make it interactive, and make it Alpine. That's it. If you don't want to run it like this — that is, already having a host OS — but you actually want to boot it really like bare metal, then you can also use your favorite boot loader, like GRUB or U-Boot or anything else. You can also netboot using PXE and HTTP for machines inside a trusted local network, because PXE is not exactly secure. And if you want something that is secure against a man in the middle — for instance, if you want to boot over the internet, or get your configuration and initramfs and all of this through HTTPS — then you can use iPXE. This is great for standalone machines on the other side of the planet. Yeah, that's it. So I'm going to do a quick demo to show how it works. The demo has been set up like this. 
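As a concrete reference for the QEMU example just described, here is a hedged sketch of what such an invocation might look like. The artifact file names and the b2c.* parameter names are reconstructions from the description in the talk, not verified syntax; the boot2container README is the authority on the exact options.

```sh
# Sketch of booting boot2container under QEMU with an interactive Alpine
# container, a cache drive and time sync. File names and b2c.* parameter
# names are assumptions based on the talk's description.
qemu-img create -f raw disk.img 1G     # disk that will be auto-formatted as the cache

qemu-system-x86_64 -m 512M -nographic \
  -kernel linux-x86_64 \
  -initrd initramfs.linux_amd64.cpio.xz \
  -drive file=disk.img,format=raw \
  -append 'console=ttyS0 b2c.cache_device=auto b2c.ntp_peer=auto b2c.container="-ti docker://alpine"'
```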
So first I downloaded boot2container, then I downloaded the kernel associated with this release, which makes it easy to test. But you can provide your own kernel, and all the kernel options needed to make a kernel compatible with boot2container are specified. The only quirk is that you cannot have modules; everything has to be built in. But this is something that is going to be addressed in the future. Then I allocate the drive, which is a one-gigabyte drive, super simple. And I start QEMU. So I say that I want to use the disk, I want to use this kernel, I want to use boot2container, and I want to have a cache device — so pick any drive that is there; there is only one, so it's an easy job for it. Then I want it to get the time at boot, and then I want it to start Alpine in an interactive shell. So here we go. The command line is here, and let's start it. Let me scroll back up. So here we have a very simple Linux console. Then if we scroll down until it has started, we see u-root written in big letters. Very good. Then we see some runtime information about the machine: it's a QEMU virtual CPU for x86_64, and we have 358 megs of RAM and 1 gig of storage. Then what we can see here is that it tried to find a cache partition on the machine, but since it didn't find one, it's going to create one on the disk — it's /dev/vda. To create it, it just partitions the drive, creates an ext4 partition, and formats it. Very simple. Then it says, well, the cache partition on /dev/vda is mounted as the cache. Very good. Then it connects to the network. It's just that: it finds one network interface and then it starts it, connects to it, gets a lease. Then it synchronizes the clock using pool.ntp.org; it gets that in two seconds. Then finally it, well, starts the container. So here it has first been pulling the container, and that worked, and now it's ready. So if we do an apk update, for instance, it works. The first boot is a bit slow — as you can see, it took 20 seconds before running the actual container — but subsequent boots are going to be much faster. So I'm just exiting, okay, and then starting it again. And since it's not going to have to format anything this time, it's going to be much faster. See, it took only eight seconds this time, and again, everything is working. Okay, that's the end of the demo. So what if we want to have a real-world idea about how it's going to look? Well, here is one. These commands you've already seen. And this is saying: hey, I would like to register a MinIO S3-compatible storage. Here I just put these variables like this because they're not hard-coded. Then I want to create a volume that is called job, and I want to mirror it from, okay, the MinIO instance called job — which is maybe a bad name — and then the name of the bucket that is going to be set up for the job. Then I want boot2container to pull what is in this bucket when the pipeline starts — pipeline, because you can run multiple containers one after the other — so here it's basically at boot. And we are asking boot2container to push to the bucket every change that has been made locally; that means that if the machine dies, we'll have the last updated state. And we say that at the end of the pipeline, we want the volume to be deleted. Then we have two container calls. 
The first one is verifying that the machine has not changed since the last time we booted it; we have a database and we can verify that no hardware has changed. And then the second one is calling IGT, which is a kernel test suite for the graphics subsystem. We just say: I want to mount the job volume at /results, and then we're going to call the IGT runner and tell it to output the results to /results. That means that as we run, the results are streaming to the bucket. And finally, we just use a serial console so we can see what is happening in real time. And that's it — nothing too interesting there. So what are the other use cases there could be for boot2container? The one that I can see is having a fleet of automated systems that are either local to, well, wherever you are, or deployed in remote places. Netbooting is feasible with boot2container because you only need to download about 15 megs for both the kernel and boot2container, and then it's only the initial download of the layers; every time we reboot after that, the layers are already there, so we don't re-download everything. Then every boot behaves the same as if it were the first boot, which is great for testability and QA. That also means we don't really need local IT, except if some hardware is misbehaving, so we can just replace it. So it's really plug and play. As for examples of these deployments, they could be public transport screens, either in buses or at bus stops; or if you have a chain of shops, that could also mean they don't need to maintain the machines — they just plug them into the internet and then it downloads everything. Another use case is server provisioning in the cloud, but I'm sure they have their own systems, and I'm sure you also know more about this than me. So basically, if you have ideas about where it could be used, or if you have plans to use it, please let me know. So, as a conclusion: our graphics CI needs were that we need reproducibility of the results, the test environment and the CI infrastructure. We need reliability and simplicity, and having our own rootfs was not going in this direction, because we would have needed too many. boot2container has delivered on these requirements and brought a bit more. It's super easy to deploy anywhere, either locally or remotely, as I was saying, and it has a low maintenance cost, because if you need to upgrade it, the only thing you need to do is bump the version. It's that simple. The future work is that we're going to add support for the most common architectures, like arm64 and anything else that is supported by Go and Alpine. Then we would like to replace the shell scripts with code written in Go, and we would like to reduce the size of the initramfs by merging the different Go binaries, especially mc and Podman, so they would not have duplicated code on disk or in memory. And yeah, there are a couple more things, but that's roughly it. Here are some links if you're interested. Thanks for listening to me; I'm now available for questions. Well, I guess we won't know for another 10 seconds. I think we should just go for it. Wait one second. Okay, I assume now everyone can see us. There is a big delay, so this needs a bit of getting used to. Okay, we are in the Q&A session. Thank you so much for your talk. That was really interesting. 
And we have a few questions coming in. The first one is by Daniel, and he wants to know: what do you use for your S3 access? So I've been using mc, the MinIO client, and that's it. I was wondering if I should make my own or something like that to make it smaller, but mc has been quite compressible. And my hope is that when I merge mc and Podman into u-root, all the dependencies are going to be deduplicated, so it's not going to cost me anything. That is my hope. We have another question, and that's: what do you think of Rust? The network stack of Podman 4 has been rewritten in it, as opposed to C or, for example, Go. Well, I would prefer if it had remained in Go because, again, the duplication; but in boot2container right now I only expose the host network, because we only run one container at a time, so it makes it a little useless to have more than one network. And if you want to make something fun here, you can have a container starting multiple containers, which is actually what we do in our CI, because we use boot2container also for our CI gateways. So after this, you can have the full Podman at this point. I don't give a shit about the disk usage, because it's not re-downloaded every time. But otherwise, whatever floats upstream, as long as it doesn't make everything too big. The next question is: when is the data that is generated in the Docker volumes transferred to the S3 bucket — in real time, or on shutdown of the container? So you have a lot of conditions. Okay, two things: you specify when you want to pull and when you want to push, and for both pull and push you have these conditions. Pipeline start, so that's at the beginning, when booting. Then container start, because you can run multiple containers one after the other. Then container end, well, when it's done, so between stages of the pipeline; or pipeline end. And then you have "changes", which is going to be like mc's watch mode: whenever there's a change, local or remote, it's going to sync. So you choose. I didn't want to hard-code it for my use case; instead I just specify it on the command line. That's the thing. I think we have two more questions that I currently see on the screen; hopefully we can get through them. Afterwards, if you have more questions, you should join the private chat room — I think that opens up to the public — and continue the discussion over there. So the next question is: do you do much QEMU testing with boot2container, PCI passthrough and so on? I do not. Our objective has always been real machines. So we are using real x86 machines, and soon we'll add support for ARM and other things. So yeah, that's what it is. And how do you manage the kernel version to test? So basically this is something more related to orchestration — what needs to be booted or not — and I can link you to the so-called valve-infra, the Valve graphics CI infrastructure. Basically, what you see with boot2container on the kernel command line is something a bit more expanded in a YAML file that shows exactly how to deploy things. I can show you after. Yeah. Okay. I think we are going to be cut off in about four seconds. So thank you very much for the talk; we'll continue the Q&A session in the private chat room. Thank you.
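For the netbooting scenario described in the talk — fetching the kernel, boot2container and the container configuration over HTTPS with iPXE — a script along these lines is the usual shape. The URLs and the b2c.container value are placeholders, and the exact boot2container parameters should be taken from its documentation rather than from this sketch.

```sh
#!ipxe
# Hedged sketch of netbooting boot2container over HTTPS with iPXE.
# Hostnames, paths and the b2c.container value are placeholders.
dhcp
kernel https://boot.example.com/b2c/linux-x86_64 console=ttyS0 b2c.container="docker://registry.example.com/ci/job:latest"
initrd https://boot.example.com/b2c/initramfs.linux_amd64.cpio.xz
boot
```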
Fed up with managing your host OS for your docker environment? Try booting your containers directly from a light-weight initramfs! Flash a USB pendrive with the kernel and initramfs, or netboot it locally or from the internet, and configure it from the kernel command line. Bonus: It also supports syncing volumes with S3-compatible cloud storages, making provisioning and back-ups a breeze! Containers have been an effective way to share reproducible environments for services, CI pipelines, or even user applications. In the high availability world, orchestration can then be used to run multiple instances of the same service. However, if your goal is to run these containers on your local machines, you would first need to provision them with an operating system capable of connecting to the internet, and then downloading, extracting, and running the containers. This operating system would then need to be kept up to date across all your machines, which is error-prone and can lead to subtle differences in the run environment which may impact your services. In order to lower this maintenance cost and improve the reproducibility of the run environment, it would be best if we could drop this operating system and directly boot the containers you want to run. With newer versions of Podman, it is even painless to run systemd as the entrypoint, so why not create an initramfs that would perform the simple duty of connecting to the internet and downloading a "root" container which can be shared between all the machines? If the size could be kept reasonable, both the kernel and initramfs could then be downloaded at boot time via iPXE, either locally via PXE or from the internet. It was with this line of reasoning that we started working on a new project called boot2container, which receives its configuration via the kernel command line and constructs a pipeline of containers. Additionally, we added support for volumes, optionally synced with any S3-compatible cloud storage. This project was then used in a bare-metal CI, both for the test machines and the gateways connecting them to the outside world. There, boot2container helps to provide the much-needed reproducibility of the test environment while also making it extremely easy to replicate this infrastructure in multiple locations to maximize availability.
10.5446/57008 (DOI)
Hello everyone, my name is Rafael Fernández López and I'm a Senior Software Engineer at SUSE, and today we are going to talk about extending Kubernetes with WebAssembly. So, first of all: what is WebAssembly? It is a binary instruction format. It allows us to build once and run everywhere. It is standalone. It allows us to store metadata inside the binary. And it is a compilation target for multiple languages, so we can use the language and the toolchain that we already know and love. So we are going to extend Kubernetes with it. But how, exactly? One of the main components of Kubernetes is the API server. The API server reads and stores information in a distributed key-value store called etcd. We as users reach out to the API server; other components might as well, whether they are internal or external to the cluster. So let's look at the internals of the API server. Every time a request comes into the API server, it gets authenticated and authorized. Then the object inside this request can be mutated. Then it gets validated against the schema of the object, and then it gets validated again. If everything is fine, it will get stored into etcd. What you see here as webhooks is a concept of Kubernetes called dynamic admission control. We are able to register which kinds of operations and which kinds of objects we are interested in, and which webhooks we want to target with them. The API server will then perform an HTTP request every time a request like that comes in, and call the right webhooks in order to mutate or validate the request. And so we thought: what if we could use WebAssembly to write these small pieces of code that are able to mutate or validate requests, inside of Kubernetes? And so Kubewarden was born. With Kubewarden, we are able to define policies, and these policies can mutate or validate requests. So how are policies executed? Inside of Kubernetes, you deploy the Kubewarden controller, and by defining policies that target different kinds of resources and different operations, you are able to either mutate, accept or reject. Policies get executed inside policy servers. By default there is only one policy server, but you can create as many policy servers as you want, and so you can target different policy servers from different policies. You might want to create one policy server per tenant — this is completely up to you. So this is how it works inside of Kubernetes. You can also run policies outside of Kubernetes, so you can check whether they behave as you expect, whether you are developing a new policy or using one that you found. For that, you have the kwctl CLI tool available, which you can run on your laptop or your desktop, so you can run policies offline without the need of Kubernetes. So running is only part of the problem; the other part of the problem is distributing these policies. You are probably used to OCI registries by now, because you are using OCI registries to distribute container images already. The good news is that you can use this same OCI registry to also distribute policies, as long as the OCI registry supports OCI artifacts. And so now, how do you discover policies? We have created the Policy Hub, where you can find different policies, so you can discover them and use them as you need. And now let's look at a short demo of how to run policies. Here we have a policy that will reject services that are of type LoadBalancer. 
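Before the demo, here is a hedged sketch of how such a policy could be bound in a cluster — the kind of "policy targeting resources and operations" the controller consumes, as described above. The apiVersion, module URI and settings shown are assumptions for illustration; the exact fields should be checked against the Kubewarden documentation.

```yaml
# Illustrative ClusterAdmissionPolicy: validate Service objects with a Wasm policy.
apiVersion: policies.kubewarden.io/v1alpha2   # version string is an assumption
kind: ClusterAdmissionPolicy
metadata:
  name: no-loadbalancer-services
spec:
  # Placeholder OCI artifact URI for the Wasm policy module
  module: registry://registry.example.com/policies/no-loadbalancer:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["services"]
      operations: ["CREATE", "UPDATE"]
  mutating: false
  settings: {}   # policy-specific configuration, if any
```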
So we are going to pull this policy (you can think of this as podman pull and podman images, but just for policies) with kwctl. And you can see now that we get the list, just the policy that we just pulled. We can inspect the policy; remember that this information is inside of the binary and travels with the policy itself. And we can see a request that should be allowed. In this case we are creating a service of type ClusterIP, so this should be fine. Let's run that with kwctl, and we see that this was allowed. So this request would be allowed if we deployed this on top of the cluster. Let's look at another request. In this case it tries to create a LoadBalancer service type and this gets rejected by the policy. You can see that the message comes from the policy itself. You can see more commands in kwctl: you can inspect policies, you can annotate them, you can pull, you can push to an OCI registry as an OCI artifact. You can also verify them by using Sigstore. So you have different options over there. So join us. Everything is being done in the open, so you can go to our GitHub org. We also have a Twitter account. You can join the chat channels, like the official Kubernetes one or the Rancher Users one. And we also have an official website, so you can look into that. So that's all from me. Thank you for joining me today, and enjoy the rest of the event. So everyone, welcome to the Q&A for Extending Kubernetes with WebAssembly. I see there are currently no questions. Rafa, maybe you can tell us more about this. Yeah, absolutely. So one of the things that I wanted to also add to what I have already said is that you can write policies in the languages that you want. Right now we have SDKs (and by SDK, I mean a library that you can use to write the policy; it's super simple) in Rust, Swift and Go. In the Go case, you have to use TinyGo as the compiler, because the official Go compiler from Google doesn't allow you to build a standalone WebAssembly module. And so you have Rust, Swift and Go. And we also have a policy, for example, written in AssemblyScript. We don't have an SDK for that, but if you are eager to write it, you are absolutely welcome. And besides that, you could also write policies in any language that you can use with waPC. This is what we are using in order to call policies from the policy server and kwctl. And so with this waPC procedure call, you basically write your policy. And so you could, for example, also write policies in Zig if you were interested. We don't have an SDK for that either. But yeah. And besides that, we also have plans for extending Kubewarden, so you can also have background checks. Like, for example, I have something created in the cluster, everything is fine, it passed all the policies. Then I deployed a new policy afterwards, and resources that are already persisted are now breaking the new policies. So you would have background checking in order to tell you about this. And we are also working on another mode in which, before you actually deploy policies into the cluster in production, you are able to see if they were about to break something that is not yet on the cluster. So for example, I could deploy a policy in this mode, which is just a notifying mode (we don't have a name for it yet). The idea is that before I deploy this policy in production, where it could potentially break stuff, I deploy it in this mode, maybe notify, I don't know.
And then you see what it would have rejected. It won't actually reject, but you see what it would have rejected or what it would have mutated. So you have this information in your logs, and you are able to deploy policies in this mode without them actually running like in production, taking actions, rejecting, accepting or mutating requests. And besides that, if you also want to create policies, you are absolutely welcome. We have about 22 policies right now in the Kubewarden Policy Hub. This is where we serve policies; people can learn about new ones, deploy new ones, publish new ones. So take a look at that and create policies if you are missing something over there. And with that, I think we are right out of time. So thank you, everyone, and enjoy the rest of FOSDEM. Thanks, Rafa. Thank you. Thank you.
WebAssembly is a portable binary instruction format that was originally created with the browser as the main execution runtime. However, during the last years, WebAssembly is finding its way also outside of the browser because of the many benefits it provides like portability, security and flexibility. We think WebAssembly can be leveraged by Kubernetes in many ways. This short session will focus on how WebAssembly can be used to write Kubernetes admission policies. We will show an open source Kubernetes Dynamic admission controller that uses policies written in WebAssembly to validate and mutate the requests made against the Kubernetes API server.
10.5446/57009 (DOI)
Hello everyone, thank you for joining the distributions devroom and welcome to my talk about CentOS Stream and how it works. My name is Aleksandra Fedorova, I'm a CI engineer at Red Hat, and I work on CI for Fedora, RHEL and CentOS Stream. You can find me on various channels if you want to learn more, but let's get started. In the last year we saw many news items and conversations about CentOS Stream, and many of them started with a picture which looked like this, where Fedora, RHEL and CentOS are represented as boxes connected with arrows. And I cannot say that this picture is wrong, I think there is a way to interpret it, but I think it is very misleading. For example, when you link boxes like this with arrows, you implicitly assume that the relations between Fedora, CentOS Stream and RHEL are similar. And as we will see in the talk today, they are actually very different. So in my approach to this topic, I try to represent Linux distributions not as boxes, but actually as lines. Because we all know that a Linux distribution is not just the set of data on the ISO which you use to install your system. It's also the updates flow, the channel or the stream, which can be very different from distribution to distribution and actually creates the main value of using Linux distributions over just copy-pasting some content from places on the internet. The rules of how these update channels function define what the Linux distribution is. And that's why I prefer to show a Linux distribution on diagrams not as a box, but as a line. And it also can be multiple lines. For example, here on this diagram I have drawn the chart of the Fedora distribution, where I tried to show the rate of changes over time. And you can see that Fedora internally has several branches: there is the Fedora Rawhide branch, which goes on and on and gets the most recent, latest and greatest pieces of the distribution software. And then we have stable branches, which we create at certain points and which then go through certain development phases, through freeze, through release, and then all the way to the end of life. And this approach I will apply to CentOS Stream today, so let's see how it works. Let's consider first the bootstrap. Bootstrap can mean different things for different people, but in the way we talk about Linux distributions, bootstrapping means how we create that initial set of components, initial set of tools and initial build environment which are then used to build everything else and build this constant updates flow. We need to start from something, and nowadays distributions rarely start new versions from scratch. They usually use something as a base to build upon. And that is exactly the case in the CentOS Stream world. The CentOS Stream bootstrap story is a story about Fedora. So here is the picture again. I know it might be a bit overwhelming, but it's actually quite simple. This is how CentOS Stream starts. First we have the same Fedora Rawhide branch we saw on the previous slide. Fedora Rawhide is a flow of updates which land in Fedora repositories and get the newest versions of everything. And then we have Fedora ELN. Fedora ELN is a recent concept, and we introduced it as a rebuild of Fedora Rawhide sources in an enterprise-like buildroot. So we take the same sources of the same Fedora Rawhide packages, only a subset of them, not all of them. Fedora has more than 20,000 packages and Fedora ELN is about several thousand, maybe three or four.
And then we rebuild them in a build environment which resembles, in a way, the enterprise Linux environment. Then the bootstrap event happens. As you see on this diagram, it happens in a relatively short amount of time, and it happens when a certain Fedora stable branch is created, between the moment the Fedora branch is created and the moment the Fedora branch is frozen and ready to be released. So when you look at how Fedora and CentOS Stream are related: while Fedora is critically important for CentOS Stream and RHEL and it creates the foundation on which CentOS Stream is built, the actual technical interaction between Fedora and CentOS Stream packages, components and updates happens in this very, very short amount of time (again, relative to the life cycle of CentOS Stream itself), only within that bootstrap moment. And so what happens in that bootstrap moment is that we take a certain snapshot of Fedora ELN, and we use it to start building CentOS Stream packages. And while the Fedora stable branch is being developed, we continuously sync Fedora updates into that freshly created CentOS Stream build environment and rebuild them from the same sources as CentOS packages. So within this bootstrap time, CentOS Stream doesn't have its own content; all the content it has is Fedora sources, sources from a stable branch, which are rebuilt in the enterprise-like environment. And then Fedora reaches its development freeze before the release. This is the point where Fedora is going to be locked down for the release itself and for stable updates. And at this moment, CentOS Stream breaks the sync from Fedora and becomes its own independent content. So CentOS Stream gets the possibility to actually change stuff in the packages. Some components break this synchronization earlier, some still continue fetching certain changes from the Fedora stable branch later, and maybe some continue to get new changes from other sources. But basically, after the development freeze of Fedora, CentOS Stream becomes fully independent, and it has no link to the exact Fedora updates. Fedora updates go in Fedora; CentOS Stream gets its own patches and contributions. So having this picture in mind, let's double-check. A common question I saw several times was: does CentOS Stream now replace Fedora? I hope now you can answer this. The answer is no, because, looking again at this picture, CentOS Stream development starts when Fedora development ends. There is basically no overlap except the bootstrap phase. So CentOS Stream originates from Fedora, but it in no way replaces it. Yet another question people ask is: is CentOS Stream now a Fedora LTS? Well, we can say sort of, but really no. Because, and again it is important to understand this, the bootstrap of CentOS Stream is not just taking RPMs from Fedora and continuing the path, continuing to update them. Bootstrapping CentOS Stream means taking sources from Fedora, reworking them, and then building them in a different environment with a different build configuration, with different macros, with a different set of content, different build flags. And while we use Fedora sources to create this initial CentOS Stream content, it actually is a separate distribution. So there is no easy continuity between a Fedora stable branch and CentOS Stream; the gap between the Fedora stable branch and the beginning of CentOS Stream is really too large.
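As a rough mental model of the bootstrap window just described (a Fedora ELN snapshot, continuous rebuilds of Fedora stable-branch sources in the enterprise-like buildroot, and a hard stop at the Fedora freeze), here is a toy sketch. The package names, revisions and the freeze offset are invented; this is not how the real tooling works, only the shape of the timeline.

```python
# A toy model of the bootstrap window described above: CentOS Stream content
# starts as rebuilt Fedora sources and stops syncing at the Fedora freeze.
# Package names, revision labels and the freeze day are invented.

from dataclasses import dataclass

@dataclass
class SourceUpdate:
    package: str
    fedora_rev: str
    day: int          # days since the Fedora stable branch was created

FEDORA_FREEZE_DAY = 60    # assumption for the sketch

def bootstrap_stream(updates: list[SourceUpdate]) -> dict[str, str]:
    """Return the revision each Stream package is rebuilt from at freeze time."""
    stream_dist_git: dict[str, str] = {}
    for upd in sorted(updates, key=lambda u: u.day):
        if upd.day > FEDORA_FREEZE_DAY:
            # after the freeze, Stream no longer follows Fedora updates
            continue
        # "rebuild" the same sources in the enterprise-like buildroot
        stream_dist_git[upd.package] = upd.fedora_rev
    return stream_dist_git

print(bootstrap_stream([
    SourceUpdate("gcc", "f34-1", 10),
    SourceUpdate("gcc", "f34-2", 45),
    SourceUpdate("gcc", "f34-3", 70),   # ignored: lands after the freeze
]))
# {'gcc': 'f34-2'}
```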
So now, what the CentOS Stream bootstrap does; I added some notes here to clarify. We use a snapshot of Fedora ELN as a starting point. We consume updates from the Fedora branch until the Fedora development freeze and rework them. We only work on a curated subset of packages; it's like an order of magnitude smaller than Fedora. The subset packages get trimmed dependencies, and build flags and options are set differently, according to Red Hat Enterprise Linux business requirements. And yeah, we do several mass rebuilds during the bootstrap phase. So none of the Fedora packages is taken as a binary package and included in CentOS Stream. No, there is always a rebuild process, changing sources, changing build configurations. And it is important to also note that this synchronously bootstraps RHEL. RHEL was bootstrapped from Fedora several times already, for each of the major releases of RHEL. And what makes this whole process new is that with CentOS Stream 9, this process became public. So we now see it in action in all public resources, so you can actually see how the curated subset of packages is built. You can see how the packages appear in CentOS Stream. You can see the evolution of the buildroot, how it goes through certain changes to get to the right state of the distribution. And this is the benefit which CentOS Stream brings: during the bootstrap process you can actually influence how the project is bootstrapped. You can bring your use cases and add or remove packages from that curated subset. You can discuss the build flags, and you can participate in deciding how the next years of CentOS Stream are going to work. So now, once we have figured out the bootstrap part, how CentOS Stream starts, let's look into how CentOS Stream goes on, or is developed. Linux distributions are developed by updating packages. So the development of CentOS Stream, or RHEL, means developing the updates of packages. And as I mentioned before, when we were talking about bootstrap, that was the main point where CentOS Stream interacted with Fedora. Once we go into the development and updates process, we don't talk about Fedora anymore; Fedora is the upstream which then targets the next major release. Of course, it is important to contribute to it, because the next major release also needs to use Fedora as a foundation. But the updates themselves and the development of the bootstrapped CentOS Stream and RHEL are not tied to Fedora updates in any way anymore. So the updates story is a story about CentOS Stream and RHEL. And as I mentioned, development and update of a Linux distribution means development and update of packages. On a very high level, the path of a package update in CentOS Stream looks as follows. You need to merge the update of sources (a patch, the new source code, a new spec file) into CentOS Stream GitLab. Then you need to build it in the CentOS Stream build system, which is Koji. Then, once you have binary RPMs to work with, you need to test them. And then, once you have RPMs of the proper quality, you can publish them to a repository, to the ISO image and so on. This is a high-level picture; a slightly more detailed picture of the path of a package I have drawn here on this slide. So I want to highlight here the separate levels of checks which the package goes through. As I said, we first need to merge the sources to GitLab. And to merge sources to GitLab, we have checks on the merge request.
So all contributions to CentOS Stream go through open pull requests, merge requests in CentOS Stream GitLab. You can comment on them, and you can also see how CI is triggered on them. So we run some sanity tests, some functional testing and some additional verification on the merge request before the change is merged into the CentOS Stream sources. Once the change goes into GitLab and gets merged into the CentOS Stream 9 branch, we build the package and produce a binary RPM out of these Git sources. The binary RPM we build first lands in the gate tag, as we call it. So it's tagged into a pool of packages which are ready for verification. Then, once the package is built and in the gate tag, it needs to pass certain package-level checks: functional testing, sanity testing, whatever levels of testing are available. And then, once the package is verified, we put it into the pending tag, the pool of packages which are ready to be composed. After a package lands in the pending tag, we have a periodic compose procedure which creates the repository and images (the boot ISO, DVD ISO and all those familiar artifacts) out of the pool of packages from the pending tag. Then this repository and those images need to pass compose checks, and then they can be published to the actual CentOS Stream mirrors. So these are the three levels of checks. Each check is an interesting topic on its own which we could talk more about, but let's proceed and take a look at where RHEL is. So if we look into the sources, the GitLab sources, the RPM specs, the files in the dist-git which CentOS Stream uses, these are exactly the same sources as the RHEL sources. A contribution to CentOS Stream, when it's merged to CentOS Stream, is a contribution to RHEL. That's why the ultimate action of merging is controlled by RHEL development. So anyone can submit a merge request, but accepting the merge request means accepting it into RHEL, so only the RHEL developers can actually do this acceptance of the merge request. Now, once the sources are accepted into the dist-git, they become the RHEL sources and then the build process happens. And so in the build process, we have two synchronized build procedures: one for CentOS Stream, which is happening in the CentOS Stream Koji, and one mirroring these exact steps, but on the RHEL side, in the internal RHEL infrastructure. So we build RHEL and CentOS Stream binary packages simultaneously from the same sources. These packages pass the package checks synchronously and they synchronously land in the pending tag. And then the third part of this process, the composing process, goes independently. RHEL has its own compose process, compose verification, shipping rules and schedules. And the CentOS project has composes, compose checks and mirrors on its own. So if you're interested in what kind of rules govern the publication of CentOS Stream composes to mirrors, you can come to the CentOS community, you can bring your ideas, you can bring your gating checks if you like. And this whole process of publication of images and repositories needs to be discussed with the CentOS Board and is ruled on by the CentOS Board. Now, on this diagram, I'm trying to take a little bit of a closer look into that middle synchronization part, to explain how the joint synchronized gate with package tests actually works. So as I mentioned, we build two packages simultaneously: one binary package built for CentOS Stream and the other binary package built from the same sources.
At the same time, we build for Red Hat Enterprise Linux. Both packages appear in the gate, and these gates don't work independently. These gates can have different tests (we usually have more RHEL tests running on the internal side of the infrastructure), but the test results are considered together. So a CentOS Stream package can pass to the next step only if its RHEL counterpart also passes its required tests. They go through all gating steps only together. If one of them fails, nothing gets promoted to the next step and nothing gets shipped to CentOS Stream or to RHEL. And it's also worth mentioning that the entire process on this picture, even though it's synchronized between CentOS Stream and Red Hat Enterprise Linux, is actually performed by RHEL development and QE, because RHEL delivery depends on it. So it's managed by RHEL qualification and RHEL engineering. With this schema of the pipeline in mind, let's look into a common question people ask on various platforms: is CentOS Stream like Debian testing? No. The closest concept in Debian, possibly, is the Debian stable proposed-updates repository. I bet you didn't hear about it before. But such a repository exists, and it is similar in that it creates a channel for updates which are expected to be shipped in the next minor stable release of Debian. But it's worth noting that Debian doesn't have the similar package-level verification and package synchronization in that verification process. So while it is your closest bet in the Debian world, it's actually still not the same thing as CentOS Stream as I described before. And now the story of CentOS Stream updates wouldn't be complete without talking about how CentOS Stream relates to minor releases of RHEL. So in this picture, I use my favorite change-over-time graph, but now I have tried to explain how RHEL development and RHEL shipping are different and where the place of CentOS Stream is here. As I mentioned, RHEL packages are built at the same moment in time when CentOS Stream packages are built. And they pass the gating verification at the same time the CentOS Stream packages are passing this verification. The difference between CentOS Stream and RHEL in this case is that once a RHEL package passes the verification, it lands in the RHEL nightly; it is not yet shipped to the RHEL mirrors. So RHEL shipping looks more like that red line down there, which has these steps for each minor release. We release a certain minor version of RHEL, it gets some updates (but a very small number of them), waits for some time, and then with the next minor release it does a jump to accumulate all the packages which were updated during that time. Then again it waits for updates for some time, and then it does the next jump, and so on. The RHEL development branch, which is CentOS Stream, actually gets these updates all the time and releases them as they are ready, as they pass the verification, including the compose verification. And many people were concerned about how this affects their compatibility, but looking at this diagram, we can see that CentOS Stream actually doesn't run away from RHEL. It is tracking the RHEL development branch, but the RHEL development branch goes on within the RHEL ABI compatibility, and it doesn't fluctuate around this that much. We keep collecting packages for the next minor release; that's what we're doing in the development branch, and that's what CentOS Stream is tracking.
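Coming back to the synchronized gate described above, it boils down to a simple rule: the CentOS Stream build and the RHEL build of the same sources are promoted together or not at all. A toy sketch of that rule, with placeholder test names:

```python
# Sketch of the synchronized gating rule: a build is promoted to the pending
# tag only if its RHEL counterpart also passed. Test names are placeholders,
# not the real gating test suites.

def gate(stream_results: dict[str, bool], rhel_results: dict[str, bool]) -> str:
    stream_ok = all(stream_results.values())
    rhel_ok = all(rhel_results.values())
    if stream_ok and rhel_ok:
        return "promote both builds to the pending tag"
    return "hold both builds; nothing ships to CentOS Stream or RHEL"

print(gate({"install": True, "sanity": True},
           {"install": True, "sanity": True, "internal-regression": False}))
# hold both builds; nothing ships to CentOS Stream or RHEL
```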
So again, a usual question: does it mean that CentOS Stream is rolling, like Gentoo? No. CentOS Stream is stabilized on the state of the major RHEL release that it was released at. And once that is done, we continue updating CentOS Stream and RHEL, but we continue updating only within the state which we set when we were doing the first major release. The RHEL compatibility applies to CentOS Stream the same way as it applies to each RHEL minor release. If you're interested in more details of how this works for RHEL, you can take a look at this link and see the certain levels of compatibility which exist. Now, we had many questions, we saw some questions there, and you may be wondering now: what is CentOS Stream like then? And the thing here is that, yes, CentOS Stream is a new concept and it's quite a unique concept. It is no surprise that we try to explain it via familiar terms and try to find analogies in other places where we know some examples, but in fact CentOS Stream is like CentOS. Now it's stable, it's continuous, it goes together with RHEL and it's synchronized with RHEL, and that's how it works. So maybe in some future years, rather than trying to find something that CentOS Stream is like, we will be talking about CentOS Stream as its own model, which may be worth looking at. And as a bonus slide, maybe slightly better boxes. If you're still stuck with the box diagrams and the arrows there, maybe this will help: the arrows have meaning, and the Fedora to CentOS Stream arrow is the bootstrap, which happens once per major release, and CentOS Stream to RHEL is the continuous synchronized updates of the development branches. And that's about it. Thank you for listening, I'm looking forward to questions, and of course reach out if you want to talk more. And I was muted, so I'll repeat again: as I said, welcome to the live streaming part of this, and I'm just going to the questions and answers. And yeah, if you fell asleep during the talk, now it's time to wake up and start asking your questions. And the first one I see is: how do the gating tests in CentOS Stream compare to the ones in Fedora CI? If there are additional tests in Stream, can they be contributed back? So this is an awesome question and there are two kinds of notes here. First, we keep the CI systems in Fedora, Red Hat Enterprise Linux and CentOS Stream very much the same. So basically, if you figured out how to run tests in Fedora, you know how they can be ported to CentOS Stream, and the opposite. If you know how to run tests on RHEL, you can run them on CentOS Stream and you can run them on Fedora. But this is on the infrastructure side; we try to maintain the infrastructures in synchronization. When we talk about the test content itself, currently there are many more tests we run on the internal part of the RHEL infrastructure. But historically these tests and these infrastructures were built over years, and we are just starting the CentOS Stream part, and we got CI for Fedora relatively recently. So not all RHEL internal tests are available in Fedora or CentOS Stream right now. So to maybe repeat that slide, let me find the one with the synchronization: you can see here why it's not such a big issue for CentOS Stream right now. The RHEL part of the testing, the RHEL part of the CI, actually participates in testing CentOS updates, in the way that if the internal part fails, the CentOS Stream update will not pass through as well.
But we are working with REL QE folks to also upstream in tests and making all of them public and open source. There is literally the upstream first for tests initiative in REL quality engineering right now. So I hope in the future you will see more of the REL internal testing published to the infrastructure to the of Fedora in upstream projects and to CentOS Stream. The other question, more of a comment I see is that on one of the Debian part where I was probably misinterpreting some of the Debian process, thank you for the comment. So the comment says it's not actually, so I said Debian stable proposed updates, repositories used for creating minor releases of Debian. But the comment is that it's not actually minor releases of Debian, minor releases of Debian created from stable repo, not from the proposed updates. So then I would say that CentOS Stream is actually even closer to just Debian stable repository in this situation, but it's actually, yeah, I'm not sure how it should be mapped properly and maybe, yeah, as I said, we just should stop trying to map CentOS Stream to something else and just take it for what it is CentOS Stream. And I wonder if there are any other questions. Okay. Yes, I do have a REL9 t-shirt. REL9 is coming and we are preparing for that. I think actually if you're, yeah, just personal recommendation, take it for what it is. If I were starting a new project right now, I would really start it based on CentOS Stream 9 and save myself from doing one unnecessary major upgrade. Of course, if I had an infrastructure already like Taylor to CentOS 8 and REL8 and whatever, I would probably continue using so. But for new projects, new infrastructures, I would consider already hoping on CentOS Stream 9, getting used to it and then like flowing into REL9 release and everything else. Okay. Any other questions or just join me in the chat for the talk. I will be sitting there waiting for more questions if you come with more or just for a generic discussion on CentOS Stream. Neil, I will send your question about REL9 swag to Red Hat Management and we will try to figure out something for you. I will figure it out. Okay. Anything else? Okay. What I can maybe talk more about? What else we have? Yeah. Fedora ELN 10. That's maybe worth reminding especially. Like Fedora ELN doesn't have a version. Fedora ELN is something which goes together with Rohide on and on and on and on and take the latest and greatest. So basically, Fedora ELN already bootstraps the CentOS Stream 10 now. So if you're looking forward for the future, this is where you can participate in shaping what REL10 or CentOS Stream 10 are going to be in the future. That's already started. So I know some folks are still like surviving CentOS 6 end of life, but we're actually going to also REL10 quite soon. I mean, a couple of years soon, maybe that kind of level. Okay. Okay. And honestly, as a continuous integration engineer, I always felt that Linux distributions are the ultimate challenge of continuous integration. And when I see this kind of flow of Fedora through ELN to CentOS Stream to REL, this really pleases my CIA engineer inside me seeing how this flow works. I know not everyone is on board with the idea, and this is perfectly okay. But I think it's worth having this as an option, having this as a place to look at at least, and maybe figure out what's your future plans around this, what are the variables going to be and how it will work. So we're closing up the live streaming session. 
And again, welcome to the other room.
CentOS Stream was introduced in September 2019. In December 2020 it made news, raised a lot of questions and created long hand-wavy discussions and confusing arguments. During 2021 CentOS Stream 9 finally has found its place in the RHEL 9 development process. And now, in early 2022, we can take a good look at how it actually works. This talk is focused on the development process of the CentOS Stream distribution. We are going to talk about bootstrap, package updates, continuous integration, testing and contribution. We welcome distribution developers, but also users which are interested to know what's hidden under the hood of a typical enterprise-level system.
10.5446/57012 (DOI)
Hello everyone. Today's talk is named "Build and Release Tools, tailored to building, releasing and maintaining Linux distributions and forks". Firstly, I'm going to introduce myself and my team. Then I'm going to talk about what Rocky Linux is, how we import sources, how we build packages, and why we built our own meta-orchestrator; how we compose releases and how we inject errata information; and also what we want to do going forward. My name is Mustafa. I'm the Release Engineering co-lead for the Rocky Linux project. I'm also a software engineer at CIQ, which is the founding sponsor for Rocky Linux, and I'm studying health technology at UiT The Arctic University of Norway. I want to talk about my team at the RESF, the Rocky Enterprise Software Foundation. Louis is my co-lead, and he's basically running all the day-to-day operations; if you subscribe to the Rocky mailing lists, I'm sure you've seen his name. Skip is our troubleshooter; he helps us prepare for major and sometimes minor releases with his build passes. Sherif is our secure boot / shim expert, and has been leading those efforts since the start. Skip and Sherif are both deputies for the team. Pablo is our advisor; he's been involved with the CentOS project for a long time and has a lot of wisdom to offer. Neil and Taylor are our infrastructure people, making sure we get what we need as fast as possible. We also have a lot of other contributors that are contributing patches and contributing to tooling and scripts. So what exactly is Rocky Linux? Rocky Linux was announced by Gregory Kurtzer after Red Hat announced the end of life of CentOS Linux. Red Hat only intends to maintain and release CentOS Stream in the future, and to replace CentOS Linux, Rocky Linux was born. CentOS Linux is a downstream RHEL clone and ensures binary compatibility as well as the same support cycle of 10 years. CentOS Stream, which is the replacement Red Hat intends, is a continuous RHEL variant, or rather the new RHEL upstream. It's an exciting new direction for the CentOS project, it also opens up for more community involvement and opens up the years-long internal development cycle for RHEL, and it's a very welcome addition to the EL distribution space. It's also viable for most use cases, although there are some use cases where a continuous distribution like Stream may not be viable, and that's why we're building Rocky Linux. CentOS Stream and Rocky Linux should be used where they fit, and it's a very acceptable solution to use Stream on one server and Rocky on another; we recommend everyone to use what works best for their environment. Let's get technical. So how do we import sources to build a distribution? We need sources for the packages, and especially for a clone, we need the exact sources. All RHEL sources are published at git.centos.org; Red Hat has amazing procedures that ensure a consistent and stable experience for clone maintainers and users. Variants are also stored in the same repository, which makes it easy to get a copy of the Stream version as well as the RHEL version. And even though CentOS Linux is discontinued, Red Hat still publishes the sources for stable RHEL.
We could do the import with a simple script, but we wanted a smooth and stable patching experience, and we also needed a way to populate module component revisions. So that's why we built srpmproc. Srpmproc pulls the latest tag (it can also pull older tags), applies patches, and also translates module component refs into revisions. To understand srpmproc, I think we first need to look into OpenPatch. The OpenPatch architecture may seem familiar to many CentOS and Fedora maintainers. We use the same RPMs and modules subgroups, but we also introduce the patch group. In the RPMs subgroup, we store the source RPMs in SCM format, which is the same format rpmbuild requires, but also the format that CentOS, or Red Hat, stores them in. In modules, we store the module definitions and sources, and the translated module documents with hashes populated from the component imports in RPMs. So all module components should first be imported into RPMs before the module document itself is imported. The patch group contains directives to modify the source RPM repository before it's pushed to version control. So what are OpenPatch directives? Directives are a way to specify actions to apply, or rather specific transformations to the source RPM repository, before it's pushed to version control. This makes it possible to have consistent patches which won't break with every new release, and maintainers can use a broad selection of directives to de-brand, fix up or patch packages as necessary. For example, to the left here we see the nginx directives, which replace the HTML pages with the branded Rocky pages and also add a changelog entry. On the right, we see the dotnet 5.0 directives we used to build it on Rocky. Here's a simple example of how we can invoke srpmproc. We supply a package and the lookaside, which is a storage for source blobs or source tarballs, anything binary that shouldn't be stored in a git repository. Srpmproc supports S3, GCS and local storage; in production we use S3. We also have to specify an upstream and a version. We can see here that this command pulled the c8 and c8-beta branches; invoking it with stream mode would target c8s and c8s-beta instead. Also, changing the version to seven would import the package from RHEL 7 instead of RHEL 8. This makes the tool general-purpose enough to maintain multiple major versions at the same time. And as a matter of fact, Rocky Linux source control can currently support importing and patching multiple major versions at the same time. So we have imported the packages; how do we build them? We use Koji. Koji is the build system that CentOS and Fedora use. Builds are triggered on build hosts, which fetch the source RPMs from source control and invoke a build using the mock tool. The mock tool creates a new chroot jail for each build to ensure consistency. Koji then puts all artifacts in a central repository for a specific tag, in a flat structure, and makes the build artifacts available for future builds. However, unfortunately, Koji has no native support for modules, which forces us to use MBS. MBS, the Module Build Service, is a separate service that integrates with Koji. MBS reads the module metadata from source control and initiates a build following the module document regarding build order and revisions.
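Going back to the srpmproc invocation described above, here is a toy model of the branch selection it performs: c8/c8-beta normally, c8s/c8s-beta in stream mode, and a different major version switching to RHEL 7 content. The real srpmproc flags and naming may differ; this only sketches the mapping mentioned in the talk.

```python
# Toy version of the branch selection srpmproc performs, based only on what is
# described in the talk; the real tool's options and naming may differ.

def upstream_branches(major: int, stream_mode: bool) -> list[str]:
    """Which git.centos.org branches to import for a given major version."""
    base = f"c{major}"
    if stream_mode:
        base += "s"                       # c8 -> c8s, c9 -> c9s
    return [base, f"{base}-beta"]

print(upstream_branches(8, stream_mode=False))  # ['c8', 'c8-beta']
print(upstream_branches(8, stream_mode=True))   # ['c8s', 'c8s-beta']
print(upstream_branches(7, stream_mode=False))  # ['c7', 'c7-beta']
```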
Each module stream has its own build target and isolated tag in Koji, which keeps it from mixing with the central repository. Modules are then kept in a central registry for future builds, or future module builds, and we use the previously built modules as dependencies. So we have all the tools, but how can we use them? Should we use them manually, or do we want something to tie them together? In our mind, we wanted a web-based solution with access control. Koji's web UI wasn't viable for us in that regard, and MBS didn't have a web UI. Also, to ensure quality, we wanted to constrain the maintainers to importing and building from a known package list, which we can determine from the RHEL primary repodata. We wanted maintainers to be able to import and build packages from that list: first import them into our source control, and later trigger a build in the correct system. Normal packages should be sent to Koji, while modules should be sent to MBS. We also wanted to account for packages that are both packages and modules, and we even have some packages that are normal packages, modules, and module components. So this is where Distrobuild comes in. Distrobuild is a web service that's currently in use, which allows release engineers to sign in using their Rocky account, browse package lists, and import and build them. They can only build packages which were imported through Distrobuild, and they can't make direct changes to revisions themselves. All patches should be checked in before import, and this all makes it possible for us to have a trail of patches and imports. Distrobuild is not a build tool in itself, but calls out to Koji and MBS. For example, the row above the package list shows that we successfully used Distrobuild to trigger the first ever Rocky Linux build batch, which built 1,724 packages, which comes out to almost half the packages in RHEL 8. I think I should probably do a small demo of an import and a build using Distrobuild. So let's sign in here, and let's do a simple import. We're going to import the bash package. We have already built this version, so this import should be empty, and we can trigger a build. This build will cancel and fail, since we have already built the specified NVR, and we can't build the same NVR multiple times. So this should fail. After the SRPM is built, we can also follow logs and progress in Koji. Here Koji is calling into the srpmproc wrapper to fetch blobs from the lookaside. The task has completed, and it has failed as expected, with "build already exists". So let's get back to the presentation. Now we have imported and built packages; how can we compose repositories? We're using the Pungi tool, like CentOS, to compose releases. Pungi takes in metadata and comps, spits out repositories and runs createrepo_c over that. Pungi also pulls in module metadata, combines that into one big list and adds it to the resulting repositories. We're using the same comps and metadata as CentOS and RHEL. We also developed a tool called SecPars that we use to generate errata information. SecPars polls Red Hat APIs and checks if there are any new entries. New entries are then matched up to source RPMs we have built in Koji, and the artifact names from Koji are then stored in SecPars. After we compose, we run a publisher tool to determine what artifacts in the repositories were affected by a given advisory and add that to the errata metadata.
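A rough sketch of the errata matching step described above: advisories pulled from Red Hat's security data are only attached to the compose once every build they reference actually exists in our build system. Advisory IDs, CVEs and NVRs below are invented for the illustration.

```python
# Sketch of the advisory-to-build matching described above. The advisory data
# layout is simplified; the real tooling polls Red Hat's security APIs.

built_nvrs = {"httpd-2.4.37-43.el8", "openssl-1.1.1k-6.el8"}

advisories = [
    {"id": "RLSA-2022:0001", "cves": ["CVE-2022-0001"],
     "fixed_nvrs": ["httpd-2.4.37-43.el8"]},
    {"id": "RLSA-2022:0002", "cves": ["CVE-2022-0002"],
     "fixed_nvrs": ["kernel-4.18.0-348.el8"]},   # not built yet, hold it back
]

def publishable(advisories: list[dict], built: set[str]) -> list[dict]:
    """Keep only advisories whose fixed builds all exist in the build system."""
    return [a for a in advisories if all(nvr in built for nvr in a["fixed_nvrs"])]

for adv in publishable(advisories, built_nvrs):
    print("attach to repo metadata:", adv["id"], adv["cves"])
```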
Current shortcomings of this errata tooling: it's tied to CVEs and components, where a CVE is not marked as fixed before all components are fixed. We're restructuring that to be based on the package and not the CVE directly. This way, we will be able to publish advisories for fixed packages before all affected packages are fixed. We're also working on separating multi-package CVEs from being listed in each other's advisories as an affected product. So now let's talk about the next generation of tools. We wanted to work on a tightly integrated ecosystem of services to consolidate imports, builds, and releases, we wanted it to be under one umbrella, and we chose to call it the Peridot ecosystem. We only want to rely on a single system for all builds, including the major versions that we maintain at the same time, SIGs and eventual other variants. Our current errata tools are applied post-compose and are not as integrated into the build process as we would want them to be, but with Peridot, we aim for more advanced and accurate errata integration. We also wanted to work with a modern tech stack. So what do we mean by modern tech stack? Sorry for throwing around those words, but we wanted a cloud-native solution. We wanted to ditch NFS. We wanted to be Kubernetes-native, and most services should be stateless. We are also big fans of Temporal as a workflow orchestrator, and you should totally check them out. To ditch NFS, we created something called Yumrepofs, which is a virtual repo tree that is managed without any createrepo tool, and it also enables us to have structured repositories. With Peridot, we're moving away from the flat Koji structure to maintaining the repository structure after each build, so it looks just like the production structure. Peridot maintains the structure with all necessary metadata, such as module documents and errata. Releasing a new point release or an update is then just a single button press, where we re-export the current metadata and sync packages as needed. I'm going to do a small demo of Peridot and Yumrepofs. First, I'm going to switch over to the Yumrepofs screen here, and we see that we have no repos present at this time. We're first going to do an import for each of the module components, and then we need to import the main module here. Okay. Now we can trigger a new httpd build. And we see the same SRPM stage as in Koji; the SRPM stage is done so that all architectures can build from the same SRPM. Now we're building the first component. Httpd is a module with build order, so we're building the base httpd component first. For the sake of the demo, tests are disabled. The first component is now built, and Peridot is now triggering the other two components. We can see here that it's reusing artifacts from Yumrepofs, which I can also show here as it scrolls past. So we see now that the build has succeeded; the module is now built. Now, when we refresh the upstream repo in Yumrepofs, we see new entries in the XML. If we take a look at the primary, we should now see entries for httpd and the modules, and you can see that the release contains a module tag. To install this, we also supply module metadata, and the module metadata contains defaults.
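The httpd demo above follows the module document's build order: the base component is built first, then the remaining components are triggered together, reusing the artifacts of the first batch. A toy sketch of that batching follows; the component names and buildorder values are illustrative, not the real httpd module document.

```python
# Sketch of module build ordering as shown in the httpd demo above: components
# sharing a buildorder value are built together, lower values first.

from itertools import groupby

components = [
    {"name": "httpd",     "buildorder": 0},
    {"name": "mod_http2", "buildorder": 10},
    {"name": "mod_md",    "buildorder": 10},
]

def build_batches(components: list[dict]) -> list[list[str]]:
    ordered = sorted(components, key=lambda c: c["buildorder"])
    return [[c["name"] for c in batch]
            for _, batch in groupby(ordered, key=lambda c: c["buildorder"])]

for batch in build_batches(components):
    print("build in parallel:", batch)
# build in parallel: ['httpd']
# build in parallel: ['mod_http2', 'mod_md']
```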
And here we declare the httpd module and its stream, and which artifacts and RPMs it should cover. So, goals. Our most important goal was to be able to enable access to the build system for different SIGs. We wanted a central location for the SIGs to live, as well as maintaining multiple major versions in the same service. The most important goal was to be able to automatically rebuild CentOS Stream and reuse NVRs present in the released RHEL minor release. We could determine the NVRs shortly after release and simply reuse whatever we already built, which means that we would be most prepared after a new release drops. This would further help us achieve the goal of same-day minor releases. And that's it for today. If you're interested in what we do, we would love it if you joined us at chat.rockylinux.org. I also listed various sources for the various components that I have talked about, and there will also be some time for live Q&A at the end of this presentation. So please feel free to ask any questions, and have a great day. So, we're online now. Hello everyone, thanks for attending the talk. We have two questions now. Kenneth, thank you. Let's see: is there a link available to the mock tool you're using for the build jail? Yes, mock is actually a Red Hat tool, and Koji is actually calling out to mock to build the packages. Thank you for the question. The second question is: to what extent do you take measures to prevent supply chain attacks, where malicious people try to inject backdoors or whatever into the Rocky Linux packages, and do you use any code or binary scanning tools? So yes, when we clone sources, we check hashes just as they are defined in the CentOS, or rather RHEL, sources at the git.centos.org repo. We confirm the hashes during import as well as during build, and also all patches, all changes that go into our own packages, like the branding, have to be checked into a repository beforehand and reviewed before they get in. Does that answer your question, Kenneth? I also see a new question in the distributions devroom: are the mentioned tools already available as source somewhere? Very soon; we are organizing them and preparing them for open source, and they should be available pretty soon. So, Kenneth, another great question. I think Gregory Kurtzer has been involved with the HPC community for a long time, and I think that may have played a role, or it might just be a coincidence. It's usually just whatever gets picked up and becomes standardized. So other than that, there shouldn't be any other reason, no technical reasons at least. Yes, normally either on git.rockylinux.org, and we also use the GitHub organization very frequently, so the sources usually appear either there or on GitHub. Let's see, another great question, Kenneth. So the technical differences in the result of Rocky Linux and AlmaLinux shouldn't exist. They're both Red Hat clones, so they should be compatible with each other. The differences probably lie in the community, who is engaged in the community, and the workflows, basically. Other than that, it shouldn't be much different; it's just whatever you choose to work with, basically. Are there packages that are in Rocky Linux but not in AlmaLinux, and the other way around? No. The base OS should be equal to RHEL. Alma or Rocky may have SIGs that they extend the OS with, but that's outside of core. With the core OS, the core distributions should be equal. Thanks for the question, Kenneth.
Another great question; I'll be waiting a few more seconds for some other questions. So, if you're so close, why not work more together on tooling? Isn't that a waste of effort? I think it's a good thing that we can have multiple approaches to tooling. Alma's tooling may have some advantages that our tooling doesn't have, and the other way around. So I think it's a good thing that we can develop separate tooling and even just copy each other, basically, and learn from each other. I don't see two different sets of tooling or two different clones existing as a waste. I think it's great to have choice, both in distro and in tooling. And it's also interesting to see new approaches that we didn't think of before. So I don't think it's a waste of effort. Thanks for your question, Kenneth. I'll wait just a bit for some more questions; we have five minutes left. So, we're slowly getting started with SIGs. Anything that's not in RHEL we wouldn't include in the base, in the core OS, but we want to use SIGs as kind of extensions to the OS. And with new tooling and more access control, we gradually want to open up the tools for maintainers and contributors. So I think just joining the Mattermost at chat.rockylinux.org and getting engaged with the SIGs, maybe even requesting a new SIG, should help you get started with packaging new stuff. Thanks for your question. So, if something is relevant for EPEL or is accepted into EPEL, it's always better to get it into EPEL, so Rocky, Alma, Stream and RHEL can all use it. If something can go into EPEL, it should go into EPEL. And as Neil said, it's better to just put it in EPEL. Thanks for your question. I think we have three minutes left, so we still have some time for more questions. I appreciate your questions, Kenneth. In what way is the mock tool specific to RPM building? Can you say a bit more about what it does? Could it also be useful when building stuff with other tools? So the reason mock is preferred is because it creates a new chroot to build a package in, so there's no interference with the system packages. So you can almost always guarantee that a build is in a clean state. That's why it's preferred when building RPMs. I'm not sure, and I don't want to say anything that's not correct, but I don't think you can build anything other than RPMs with mock. If anyone knows the right answer, just write it in the chat.
Maintaining a Linux distribution in a consistent and secure manner is challenging. Maintaining a one-to-one clone, can be even more challenging. Rocky Linux maintains a number of in-house tools to aid in this process and makes it as transparent and auditable as possible. Rocky Linux is aiming to be an exact RHEL clone. When the project first started out, the landscape of tools to automate imports and orchestrate builds across package types were not widespread. First challenge to tackle was imports and patches. Srpmproc was introduced to facilitate upstream imports with consistent automated patching. Distrobuild was later introduced as a meta-orchestration layer for already existing build tools within the EL ecosystem. We're now introducing Peridot, the next generation cloud-native build and release tools for RPM distributions.
10.5446/57013 (DOI)
So, moving on from that, I will speak today about automatic CPU and NUMA pinning; it's a new variant of that. First, we already know the high performance VM type, which is useful for CPU-intensive workloads, especially SAP HANA VMs. This VM type automatically configures some VM properties that are otherwise hard to configure, such as making the VM headless, dropping the USB controller, and more. But it wasn't complete: you still needed to do some manual modifications to get a real benefit from high performance in terms of CPU. So, a little bit about the CPU and its topology. We have the CPU, which is basically split into sockets; within the socket you have the cores, which are the processors, and each one of them can be split into threads. We don't deal with dies in oVirt. As for NUMA, non-uniform memory access: each NUMA node has separate CPUs, a memory controller and memory, I/O controllers and devices. It's measured in terms of locality, and usually each NUMA node has one socket, so each CPU and core is basically assigned local memory to use. This means that if you configure it right, pinning to specific memory, local memory physically close to the physical CPU, helps in terms of performance, and so on. Here is how we configure CPU pinning in oVirt. It's a string; we specify it in the VM edit configuration. It's pretty difficult to understand and difficult to write. You can limit a virtual CPU to one or more physical CPUs, and it basically reduces the movement between processors. And here is an example of a CPU pinning string. You can see it's not very easy to read. It means, for example, that virtual CPU zero is assigned to physical CPU three, and two is assigned to physical CPUs one or two, and so on. And there are limitations to using this method. It's a static configuration. Once you edit it on the VM, it requires you to pin the VM to the host. These CPUs are shared, meaning the physical CPU is not exclusive to the VM or to the virtual CPU you pin; other VMs and processes can run on the same physical CPUs. So writing a meaningful pinning string for a number of VMs on a host is a tedious task, as you can see, when it's a VM with many virtual CPUs and a host with many physical CPUs and you wish to pin it. Maybe for one VM it's fine, but when you're doing it for multiple VMs, it starts to be harsh to do. So for the high performance VM, there is a manual procedure, basically guidelines for SAP HANA VMs, for defining the pinning. And here is an example of this manual pinning. You select the host and you get its topology. Once you get the CPU topology and the NUMA topology, you change the VM CPU topology to fit this host topology. It means, for example, if you have a host with one socket, three cores and two threads, then you set your VM with one socket, two cores (which is one core less) and two threads. This is the resize process, and dropping that core is basically to leave the host enough breathing space. For high performance VMs you also pin the I/O thread and the emulator, usually to the first core. So this is the idea behind it. You change the VM NUMA to fit the host physical NUMA in terms of numbers, and then you run a script on the desired host. It generates for you the CPU pinning string based on the NUMA nodes and the host topology.
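For readers who have not seen one, here is a small sketch of reading such a pinning string, assuming the usual oVirt syntax of vCPU#pCPUset pairs joined by underscores, with "-" ranges and "," lists (exclusions with "^" are left out to keep it short). The example string encodes the mapping mentioned above: vCPU 0 on pCPU 3, vCPU 2 on pCPUs 1-2. This is an illustration, not oVirt code.

```python
# Parse an oVirt-style CPU pinning string like "0#3_2#1-2", assuming the usual
# syntax: vCPU#pCPUset pairs joined by "_", where a pCPU set may use "," lists
# and "-" ranges. Exclusions ("^") are omitted in this sketch.

def parse_pinning(pinning: str) -> dict[int, set[int]]:
    mapping: dict[int, set[int]] = {}
    for pair in pinning.split("_"):
        vcpu, pcpus = pair.split("#")
        cpus: set[int] = set()
        for part in pcpus.split(","):
            if "-" in part:
                lo, hi = (int(x) for x in part.split("-"))
                cpus.update(range(lo, hi + 1))
            else:
                cpus.add(int(part))
        mapping[int(vcpu)] = cpus
    return mapping

print(parse_pinning("0#3_2#1-2"))   # {0: {3}, 2: {1, 2}}
```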
So it pins according to the socket, of course, and because it's a script, it only supports some topologies, not all of them. Then you need to copy the output of this script, the CPU pinning string, into the VM configuration and manually pin the VM NUMA to the physical NUMA. In oVirt 4.4 we introduced a new feature, which was CPU and NUMA auto-pinning. We assign the CPUs based on the host topology. We had one policy, resize and pin, that resizes the VM topology and the NUMA nodes based on the previous manual procedure for SAP HANA. It was effective on VM edit, which means that when you set it in the configuration and click OK, all the static fields (the CPU pinning, NUMA and so on) are set, and they did not change on VM start. The configuration is kept as part of the VM static configuration. A bit about the CPU pinning policy: we introduced a new configuration on the VM, the CPU pinning policy. The resize-and-pin option does, as I said, the manual procedure automatically in oVirt. At first, like the script, it supported only the two-thread topology on the host; we now support the one-thread topology as well, and in the future we plan to make it generic for any number of threads. And we had the same limitation that you need to pin the VM to one host. As well, we introduced another policy which was called pin. The pin policy did not change the topology, it didn't do the resize part, so you keep the CPU topology that the VM was configured with, and the algorithm runs against the host and basically gets you the best pinning we can with the current CPU topology. It had one major flaw: we could use the same physical CPUs on the host for multiple VMs. For example, when you ran two VMs on a host with four sockets and each VM was supposed to use one socket, they would use the same socket, which is not good, leaving the second socket free without any pinned CPUs. So at the moment we are discussing an alternative solution to add back this policy, either using the feature of dedicated CPUs, which I will talk about a bit later, or using shared CPUs as well, like resize and pin, but changing the algorithm to decide which physical CPUs are free to use and not reuse the same ones. Here is the view in the UI and how it can be easily configured. You can just edit the VM, go into resource allocation, and set the CPU pinning policy to resize and pin NUMA. The API is pretty simple as well: you just need to provide the CPU pinning policy with the desired value. Here is an example of how we do the resizing part. For example, here we have a host, which you will see in the next slide, that has two sockets, three cores and one thread. And we have the initial VM with one socket, one core, one thread. We just increase the VM topology to have two sockets, like the host has, two cores (instead of three, we dropped one) and one thread as well. And we also set the VM with two NUMA nodes, like the host has. And here is how the pinning itself is done afterwards. After we increased and resized the topology, we now pin it. As you can see, we leave out cores zero and three, which are the first cores of their sockets. And we pin each core to the physical core accordingly: core zero in the VM goes to core one, core one to core two, socket zero to socket zero basically, and the same for the second socket, socket one. We also pin NUMA zero to the physical NUMA zero. The CPU pinning, for example the simple one, is zero to one, one to two, and so on. This is a high-level example; it's a pretty simple one.
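A toy sketch of the resize-and-pin example above: a host with two sockets, three cores and one thread, a VM resized to two sockets and two cores, and the first core of each socket left to the host. It assumes physical CPUs are numbered linearly per socket, which real hosts may not do, and it illustrates the described procedure rather than the engine's actual algorithm.

```python
# Sketch of the resize-and-pin example above: copy the host's socket/thread
# layout, drop one core per socket for the host, then pin remaining vCPUs in
# order, skipping the first physical core of each socket. Physical CPUs are
# assumed to be numbered linearly per socket.

from dataclasses import dataclass

@dataclass
class Topology:
    sockets: int
    cores: int      # cores per socket
    threads: int    # threads per core

def resize(host: Topology) -> Topology:
    """Mirror the host topology, leaving one core per socket to the host."""
    return Topology(host.sockets, max(host.cores - 1, 1), host.threads)

def pin_string(host: Topology, vm: Topology) -> str:
    pairs, vcpu = [], 0
    for socket in range(host.sockets):
        first_pcpu = socket * host.cores * host.threads
        # first_pcpu itself is left unpinned for host processes
        for core in range(1, vm.cores + 1):
            for thread in range(vm.threads):
                pcpu = first_pcpu + core * host.threads + thread
                pairs.append(f"{vcpu}#{pcpu}")
                vcpu += 1
    return "_".join(pairs)

host = Topology(sockets=2, cores=3, threads=1)
vm = resize(host)
print(vm)                      # Topology(sockets=2, cores=2, threads=1)
print(pin_string(host, vm))    # 0#1_1#2_2#4_3#5, matching the talk's example
```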
Once you add the threads and so on, it becomes a bit more complicated and pretty long. This ensures that the virtual CPUs use a virtual NUMA node and are pinned to the right physical CPUs and the right physical NUMA node, basically getting closer to bare metal in terms of CPU, because the virtual topology maps straight onto the physical one and the memory being used is in the same local place. That's the idea. While doing so, we also fixed how the splitting of the CPUs into the NUMA nodes is done when oVirt generates the CPU set for each NUMA node. With the previous algorithm, a core could be divided between two different NUMA nodes. That causes a problem: within the guest you could get a CPU topology that is not the one you actually set in the VM configuration, and that's not what we want in terms of performance. As I said earlier, in terms of NUMA you do want the threads to be in the same core, and the same core to be in the same socket and the same NUMA node. Here is an example of how it was done with the previous algorithm. We had an eight-CPU VM with one socket, four cores and two threads, and we had three virtual NUMA nodes. The old algorithm just took the number of CPUs divided by the virtual NUMA node count, which means eight divided by three, and once there was a remainder, it just added one more virtual CPU to each NUMA node until no remainder was left. Now the algorithm tries to pick the right NUMA grouping for the threads and the cores, to get the same CPU core into the same NUMA node instead of splitting it. It is better performance-wise, and also better in terms of not misleading the underlying OS in the guest and getting a topology other than the expected one. In oVirt 4.5 a new feature is coming, called dedicated CPUs, and all of this pinning work is needed for it. The new policy makes CPU pinning exclusive, so each vCPU gets exclusive ownership over physical CPUs and other vCPUs won't be able to use them, which means that each VM with dedicated CPUs gets its own physical CPUs and other VMs won't be able to use the same physical CPUs. Of course, processes and other things running on the host are still able to use them, but not other VMs. The effort was to make the CPU pinning policies similar to those of OpenStack, and based on that, it requires CPU assignment at runtime. And there we get a little bit of a chicken-and-egg problem. The old resize and pin flow was fairly simple: you just needed to pin the VM to the desired host and select the policy, and once you clicked OK, the engine set the CPU topology, the CPU pinning strings, the NUMA pinning, all the pinning itself, into the static configuration of that VM. Now, in the new resize and pin flow, we select the policy for the VM and we don't set anything. Once we do that, we run the VM and the engine selects a host for us, and only then is the pinning set. So this is coming back to the chicken-and-egg problem: we do all the validations and resource handling based on the static configuration, but in this flow, as we wish to do it in the run phase, there is no static configuration for that. We don't have anything and we haven't chosen a host yet. So we had a problem, and we needed to calculate the intended configuration and save it in a special place in order to validate and schedule the VM on a host, and only then do we know what we're currently using. Once the VM goes down, we need to reset it. And of course, we drop the limitation, so we don't need the VM to be pinned to a host.
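To illustrate the NUMA grouping change described a moment ago, here is a small sketch in Python. It is not the actual oVirt engine code, just the idea: the naive split divides vCPUs evenly and can cut a core in half across virtual NUMA nodes, while a core-aware split hands out whole cores so sibling threads stay together.

```python
# Illustrative sketch (not oVirt code): why grouping whole cores into a virtual
# NUMA node beats a naive even split of vCPUs.
# VM: 1 socket, 4 cores, 2 threads -> 8 vCPUs; threads of one core are adjacent.
cores = [[0, 1], [2, 3], [4, 5], [6, 7]]
num_numa_nodes = 3

# Naive split: 8 vCPUs / 3 nodes -> sizes [3, 3, 2]; core [2, 3] ends up split
# between node 0 and node 1, which is not a topology a real machine would have.
naive = [[0, 1, 2], [3, 4, 5], [6, 7]]

# Core-aware split: hand out whole cores round-robin, so sibling threads
# always land in the same virtual NUMA node.
core_aware = [[] for _ in range(num_numa_nodes)]
for i, core in enumerate(cores):
    core_aware[i % num_numa_nodes].extend(core)

print(naive)       # [[0, 1, 2], [3, 4, 5], [6, 7]]
print(core_aware)  # [[0, 1, 6, 7], [2, 3], [4, 5]]
```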
So we basically ended up setting it as dynamic properties that can be changed. We check it in the run phase and calculate what is needed in order to make things work and be aligned with the CPUs. So what is next and what is left? Of course, the pin policy, which is under discussion. The hugepages configuration, which completes the high performance configuration. It's a problematic configuration because we don't know the user's requirement or the host's usage, and it requires preparation to have enough hugepages of the right size set on the host; otherwise it cannot run the VM, which we don't want to happen. There is also the one-gigabyte hugepages case and the migration flow, the converge part; currently we don't do that automatically. There are links here for the dedicated CPUs policy and for pinning of high performance VMs. Thank you all, and I am ready for any questions. Thank you. Hi. Auto-pinned VMs can migrate. Previously, if you used the old flow in 4.4, it might be a problem because they need the same hardware between those hosts. But now, with the run phase, migration actually can work because it is recalculated when the VM is starting. I will repeat the question: the question was whether auto-pinned VMs can migrate, with the pinning redone on the destination host, and what happens if it does not match the destination host. As my presentation was about the high performance, CPU-intensive case: auto-pinning uses shared CPUs and it basically consumes all of the host's physical CPU hardware. So if you use a VM with a highly CPU-intensive workload, it will consume your host, so running more VMs will give less effective performance. For SAP HANA it is basically recommended to run one such VM on the host. I think this is it, mostly. You don't even need to pin the VM to a host now with the current implementation. One question. Okay. Is the auto-pinning feature oVirt specific, or would it be available on a plain Linux distro with libvirt and QEMU or KVM, for instance? Yes, it's oVirt specific. We do all the logic and the algorithm in oVirt, in the manager, so it is specific to oVirt. Just to add, of course you can do it manually when you configure your VM and run commands. Okay, I guess there are no more questions and the time is up soon, so thank you all for joining and listening. I hope it was useful for you. And see you all. Thank you.
In FOSDEM 2019 we presented the addition of high-performance virtual machines in oVirt. With this new VM type, parts of the VM configuration were changed to improve the performance of workloads it runs. In particular, it was useful for CPU-intensive workloads, such as SAP HANA. However, better performance came at the expense of usability. Users were still expected to set various things manually, like CPU and NUMA pinning and hugepages. In this talk, I will guide you through our journey of simplifying and automating the settings of high performance VMs in oVirt. We'll see the evolution of the changes, the challenges we faced, where we are today and what's more to come in oVirt 4.5.
10.5446/57014 (DOI)
Hello and welcome to this presentation of Verifiable Credentials and Decentralized Identifiers with DIDKit. I'm Charles E. Lehner, working at Spruce Systems, Inc., and this is the FOSDEM 2022 Web3 dev room. Verifiable Credentials and Decentralized Identifiers are two technology standards being developed at the World Wide Web Consortium, the W3C. Verifiable Credentials are a recommendation of the W3C, and this is what the specification looks like. It's a kind of data model for verifiable information, cryptographically verifiable or otherwise, and it has special formats for indicating specific things. The main data structures are a verifiable credential and a presentation, or a verifiable presentation. Here is an example from the specification of a verifiable credential with comments. You can see the @context for the use of JSON-LD for semantic extensibility, and then there are the required fields and optional fields: id, type, issuer identifier, issuance date, and the credential subject. The credential subject typically is where an entity is identified that the credential is issued to, and then there are other pieces of information that are called claims, which are assertions about reality, about the subject, or about the issuer, or about something else. Then there's the proof property, which is usually a cryptographic signature over the rest of the credential. Here you can see the metadata, the signature proof type, and the signature data. Decentralized Identifiers are a URI scheme and data model that is a W3C proposed recommendation, and the specification looks like this. A DID is a URI that looks like this, and the first part after the scheme is called the DID method. In this example, this would be the example DID method. The rest of the DID is up to the DID method specification. DIDs resolve to DID documents, and a DID document example is here. The DID is in the id, typically, and then the other information is in the rest of the DID document. You can see the use of JSON-LD again, and then this example contains an object for a public key, or a verification method as it's called, for the purposes of authentication. DIDKit and ssi are implementations of verifiable credentials and decentralized identifiers and other technologies. These are being developed at Spruce, and that's what I'm working on at Spruce, largely. The structure of this implementation is that ssi contains the core library in Rust for the functionality, and DIDKit embeds the ssi library into various interfaces. There's a command line interface; an HTTP interface, which follows a pre-standard that's being developed at the W3C Credentials Community Group; and there are interfaces for other languages: C, Java, Python, WebAssembly. There are native libraries for Android and iOS, there's a Node.js module and a Flutter library, and there's also a Go one. The DIDKit command line interface looks like this, and here are the links. The other interfaces, the bindings to other languages, have more or less the same functionality, that is, to issue and verify verifiable credentials and presentations, resolve DIDs, and do some key material operations and other utilities. So I'm going to try a DIDKit demo now. This is a multi-part demo to try to demonstrate the different roles that are involved in verifiable credential issuance and verification. So often, there are considered to be three roles.
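Since the slide itself isn't reproduced in this transcript, this is roughly what a minimal unsigned credential of that shape looks like. The @context URL is the standard VC v1 context; the DIDs and date are placeholders, not values from the talk.

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "id": "urn:uuid:00000000-0000-0000-0000-000000000000",
  "type": ["VerifiableCredential"],
  "issuer": "did:example:issuer",
  "issuanceDate": "2022-02-05T12:00:00Z",
  "credentialSubject": {
    "id": "did:example:subject"
  }
}
```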
There's the issuer, who issues a credential, and then there's the holder, who becomes the holder by receiving a credential that they are then holding, and then there's the verifier, who the holder presents their credential to, and then the verifier verifies it. So starting with the issuer, good start. We have a public private key pair here, and we can use didKIT to generate a did that's derived from that key pair. Using key to did for the key method, didKIT and SSI have implementations of various did methods and extensibility to use others, but the key method is built in and allows deriving a key, a did from a key pair, as well as some of the others. So we do that, and we get the did here. Now in the credential that we're issuing, I've started it out here. This one is pretty bare bones. There's, normally you'd have some meaningful information in the credential, but this is trying to be a minimal instance just to demonstrate this functionality. So we put our did in the issuer field, and also I'll note that verifier credentials don't have to be used with dids. You can use HTTP URLs, HTTPS, and other schemes, but when you combine it with dids, you can get some nice properties. So there's the issuer ID. We can put a date in this date field issuance date, and then the credential subject, this signifies, when there's an ID here, it signifies that the credential is about a particular entity, and in this case, it's going to be identified with a did, and then later on, the entity that controls that did will be able to present that credential and prove cryptographically that they possess the key material that is corresponding to that identifier. So we pretend we received this from the subject or the holder already, and it's in there. Now issue this credential, it's did kit, try the command line, passing the credential and standard input and passing the key pair and redirect this to a file. That seems to have worked. Now let's take a look. And you can see that this is a verifiable credential. The information from before is still there. The proof property has been added, which contains this ED255.9 signature over the credential and the metadata. So now we would, as issuer, give this to the subject. In this case, it's a, it may be a person or entity, and they're going to receive this credential. So I'm going to symbolize that by moving it into their directory. Now pretend we're the holder, and we go over there and we've received this credential, and we want to verify it first to make sure it's verified, verifiable, valid, and all that. And we can do that with did kit, and that shows the proof check passed, and there's no errors, so it verified. Now as a holder, we will want to present this credential to another entity who is called the verifier. And to do that, we have to wrap the verifiable credential into a verifiable presentation. So I've started one here, this is an empty verifiable presentation, it's not yet verifiable, it's just a presentation, and it's empty, so I'm going to add the verifiable credential in here, in the verifiable credential property, and then in the holder property, I would put my did as the holder. And in case I didn't have that already, I could use the same command as before passing my key, key, and that's our ID, and I put that in here, the holder property, and now this should be able to be issued, or generated as a presentation, and again using the key pair, and saving that to a file. And that seems to have worked, so we would pass this to the verifier. 
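As a condensed recap of the key generation and credential steps shown so far, here is roughly the command sequence. The subcommand names come from the didkit CLI, but the flag spellings are from memory, so check `didkit --help` for your version; some versions also expect an explicit verification method and proof purpose when issuing.

```sh
# Issuer: make a keypair and derive a did:key from it
didkit generate-ed25519-key > issuer.jwk
ISSUER_DID=$(didkit key-to-did key -k issuer.jwk)

# credential.json: the unsigned credential, with "issuer" set to $ISSUER_DID
didkit vc-issue-credential -k issuer.jwk < credential.json > credential.signed.json

# Holder: verify the received credential before holding on to it
didkit vc-verify-credential < credential.signed.json
```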
Now normally there would be an interactive verification process usually, where the verifier might provide a challenge, and a domain property, and then those would be incorporated into the presentation. But for right now, we're not using that in this demo. But we can verify it as the verifier now, by passing it in, similarly to with the credential, and that worked. And then also as the verifier, we would extract the verifiable credential from it, since this is not automatic and did kick currently, and verify that separately. Oops, forgot to pipe it in. And that worked. And then again as the verifier, we would do additional processing to verify that the subject ID is the same as the holder ID, and that the proofs are proofs that we want to allow, and then things like that, and then look at the actual data as well. So that's a verifiable presentation. One more other thing we can look at here is verifying a did, and that would be done with did resolve. Sorry, resolve a did, not verify did. And that returns a did document, looks like this, for did key anyway. Okay, so that's that. Back to the presentation. So I wanted to mention some other implementations of verifiable credentials and decentralized identifiers. These are a few of them, and there are others as well. But Transmute, Digital Bizarre, Denive Tech, Decentralized Identity Foundation, Hyperledger, and Uport have implemented various parts of this technology. And there's test suites and interop activities that make sure that they're interoperable. And I also want to mention some community and working groups. The decentralized identifiers and verifiable credentials have working groups at W3C that standardize them, and these have public mailing lists as well. And then there's the Credentials Community Group at W3C that I want to highlight, which has public meetings and it has an active email mailing list. And often on the meetings there are interesting speakers and discussions that take place. And there are also different work items that happen in there that are pre-standards that later on maybe they'll go into a standards working group. At Decentralized Identity Foundation or DIF there are working groups as well, and also there's Interoperability Open Group, which can be interesting to check out. And for people who want to make sure that their implementations are interoperable and explore different topics and issues. And Identity Workshop is a conference that takes place multiple times a year, and that's a good place for meeting people and working on this kind of technology. And Trust Over IP Foundation is also involved in things, and so is OpenID Foundation. So if you'd like to chat with me or other people at Spruce, we have a matrix room, an IRC channel, you can email here, and there's also a Discord. And if you want to contact me you can find me on Matrix, IRC, email, PGP, secure Scuttlebutt, and on the web. So thank you very much. Great, now we've got 30 seconds while streams catch up. Hello, hello. So the stream's probably caught up now mostly for buffers, so thanks everybody for listening or not, and for being present. And I'm going to mute that. So Juan Caballero is here as a co-presenter, not listed on the page because it was too late of an update, but it was listed in the description. Juan, do you want to say anything? Sure, no, I was just going to say that someone asked in the chat what the best case scenario is and what the primary purpose use cases are. 
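And the presentation and DID-resolution side of the demo, with the same caveat that the flag names are from memory rather than checked against a specific didkit release:

```sh
# Holder: wrap the signed credential in a presentation (presentation.json carries the
# credential under "verifiableCredential" and the holder's DID under "holder")
didkit vc-issue-presentation -k holder.jwk < presentation.json > presentation.signed.json

# Verifier: check the presentation, then the embedded credential separately
didkit vc-verify-presentation < presentation.signed.json
didkit vc-verify-credential < credential.signed.json

# Resolve a DID to its DID document
didkit did-resolve "$ISSUER_DID"
```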
And I think one way of explaining it just as succinctly as possible is that a did is like a universal format or like an envelope or a wrapper for enough PKI information, like the minimum amount of information you need to get and verify someone's PKI. So it's like if it's like a registered entity, it tells you where to go to get the public key, right? So it's a sort of a translation layer for PKI systems, which can be blockchain or not, can be peer-to-peer or not. So really the use case, I think, for dids is that they're just a way to make that stuff travel better. And same with VCs. I think VCs just make an atom of factual information or authoritative information more portable. They're kind of portability-oriented envelopes for data. That was a lot more than 15 words I failed. And as I was saying, as a kind of federation mechanism, I think it could be potentially useful for systems like all the many ones that have been shown today that could use it to interoperate with other systems in certain ways. Does that make any sense? Are there other questions? I think some people are typing in here. Yeah, I mean, I think in some ways portability and translation are assumed to bolt on later. And this is sort of a bolt on for everyone who just built something quickly that just takes advantage of all the built-in PKI and verification of a blockchain system. So a lot of times people building, say, for the purposes of MVP, for the purposes of just building the thing, will just assume every user has an Ethereum account because it's pretty low bar, people can spit up a bit of a mask in five seconds, or just assume, like, just use the central SPKI for now. And I think for, I'm hoping that a lot of open source projects just build against this as a starting point or as a quick way to get from that simplified MVP scope to something that can be a node in the full-stem circuit quickly using this stuff. Like a bridging tool. Right, bridging. It's probably worth mentioning, of course, in regards to the best, I read the question as best use case, but verifiable credentials are maybe probably the main driving one or a big one right now. But best case scenario is, I think that depends on philosophies, but if you think that identifiers that should be decentralized, then it's a good case if it's integrated and adopted into more systems, I would think. And a lot of systems can be using public keys as identifiers, or they're using public key hashes of identifiers. What I did is like a generalization of that into an identifier that resolves to a document that contains verification material, and now I guess to a set of public keys like a PGP key ring. But that's considered the core kernel of what you need to be able to interact in a verifiable way, i.e. through making signatures and interactive cryptographic protocols like secret handshake or libPP handshakes and things like that. And I think in a lot of use cases, in a lot of applications where like content authenticity, for example, if you want metadata to always be linkable, always be findable, right, there's a big problem in content authenticity or big data systems where you can assign metadata to a thing, but you have to make sure that metadata is always findable from the current state of the thing. So dins are a way to do that. 
You can sort of anchor things in these immutable and or persistent sticky content identifier based mechanisms so that as long as the thing exists, the metadata is findable, the metadata is secure for how to trust it, how to initiate a handshake with it, what languages it speaks, that's a really appealing thing to people who've worked with big data a lot. Well, in the last minute, I just wanted to say great presentations to everyone else. I was watching a lot of them and it's really cool. And I hope these systems succeed and do the best that we can, that anyone can to improve things and all that. Yeah, I hope next full-stim I'm in central European time and not road tripping. Apologies for scheduling it wrong full-stim. I'll see you next time. Yeah, we'll see you all. Everyone's welcome to join the chat.
We present DIDKit, a toolkit for Verifiable Credentials and Decentralized Identifiers, implemented in Rust. This includes an introduction to Verifiable Credentials ("VCs"), a W3C Recommendation; to Decentralized Identifiers ("DIDs"), a Proposed W3C Recommendation; and to the signing formats used with these: JSON Web Signatures (JWS) and Linked Data Proofs. Charles Lehner will demo using DIDKit to issue, present and verify verifiable credentials using DIDs based on cryptographic keypairs. Charles is joined by Juan Caballero (@bumblefudge) for the Q&A.
10.5446/57015 (DOI)
Hi, I'm Drew DeVault and I'm going to introduce you to a project called qbe today (pronounced "cube"). qbe is a backend for compilers. It serves a similar purpose as LLVM. It has an intermediate representation, which is the domain-specific language it uses. It's a simplified way of writing programs that higher-level compilers can translate their source code into. It uses this language and takes advantage of this simpler form and the stricter semantics that it uses, like static single assignment, to optimize the program and then output machine code. qbe is an optimizing compiler backend and its stated goal is to provide 70% of the performance of advanced compilers using 10% of the code. Today it supports x86_64, aarch64, and riscv64, and it does so in about 14,000 lines of C99 code. The input to qbe looks something like this. I'm not going to spend too much time on the intermediate representation that it uses, but just to give you an idea of what it looks like, here's a sample. This is a program which prints "1 + 1 = 2", and we can see here a couple of things. We have function definitions which include in the function definition the signature, so w here is short for word, and then we have a couple of temporaries, a and b, in the add function, and the signature defines the ABI of the function, which is based on the System V ABI, the same ABI that C uses for x86. qbe provides interoperability with other C compilers based on the same ABI. Then we have the main function here, which calls this add function, calls printf and returns 0. Then we can compile this function to get this machine code. I'm not going to go into too much detail on what any of this does, but if you understand x86 assembly, this program should look fairly reasonable to you. If we compile this into an object file and link it to libc, we can see that it does what it says on the tin and adds 1 plus 1. On top of qbe we need to add front ends to get something more useful. One of the most sophisticated front ends based on qbe today is Michael Forney's cproc C compiler, which is a self-hosting C11 compiler based on qbe. Self-hosting meaning that it can compile itself, and C11 meaning the latest version of the C standard that has been ratified. It supports some C2x features, but they haven't finished C2x and we haven't finished implementing it for cproc either, so that's something for the future. But cproc's 8,000 lines of C plus qbe's 14,000 lines of C is enough to get a compiler which can build some serious real-world programs, including GCC 4.7, which was the last version of GCC to be written in C and not in C++, plus other stuff like binutils, util-linux, BearSSL, git, U-Boot, and many, many others also work with cproc. There's a link on the slide here to a list of programs that are used in the Oasis Linux distribution, which is designed to use cproc as its main compiler. This list includes the programs which they have been able to get working with cproc; some of them needed patches here and there, mostly to make them conformant with the C specification, because cproc does not implement a whole lot of GCC extensions. Other things that it does not implement include variable-length arrays, thread-local storage, position-independent code (so no shared libraries), or inline assembly, plus a small handful of other features. But cproc is one of the most sophisticated non-mainstream compilers available today, and it does so in a very small amount of code.
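The IL listing on the slide isn't captured in this transcript; the program described is essentially the hello-world example from the qbe documentation, so here is a reconstruction from memory of that example rather than the exact slide contents.

```
function w $add(w %a, w %b) {
@start
	%c =w add %a, %b
	ret %c
}
export function w $main() {
@start
	%r =w call $add(w 1, w 1)
	call $printf(l $fmt, ..., w %r)
	ret 0
}
data $fmt = { b "1 + 1 = %d\n", b 0 }
```

Turning it into a binary would look roughly like `qbe -o out.s demo.ssa && cc out.s -o demo`, with qbe emitting assembly for the host target by default.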
Here is an example of using cproc. It's very similar to the earlier example, but I just wanted to share a sample of what the code looks like when generated from a very simple C program. At the top we have a Hello World program, and then I have this incantation to make cproc emit its qbe IR, which looks much like the previous sample of qbe IL. But how is the performance? The claim is that they want to achieve 70% of the performance of mainstream compilers like LLVM and GCC in 10% of the code. The first claim I want to look at is this 10% number. Here we can see, based on whatever version of LLVM and GCC I happened to have checked out on my workstation at the time, which was probably less than a year old, how many lines of code are present in each of these projects. It probably hasn't changed much since anyway, so I imagine these are fairly accurate, especially because they're rounded. Anyway: LLVM, about 10 million lines of code; GCC, about 9 million lines of code; and qbe is 14,000 lines of code. This is a substantial difference in the complexity of these projects. In fact, it's not just 10% of the code, but 0.1% of the code. So if it can achieve the 70% number in 0.1% of the code, that would, in my opinion, be quite a profound achievement. So the question is whether it does. I'm going to demonstrate this by compiling the BearSSL library right now in front of you. But before I do that, I'm going to bootstrap the entire toolchain. So I'll start by timing how long it takes to compile qbe: 1.68 seconds, and we'll run the test suite as well, and that took 2.97 seconds. Then I'm going to compile cproc. Let me time it. OK, cproc took 1.19 seconds to compile, and then I'll run its test suite too, and that took a fraction of a second. And then just for fun, I'll compile it with itself. And that took 0.56 seconds. So in the past 15 seconds, I have compiled from scratch, including the test suites, a working C11 toolchain. Not including binutils, but the compiler and the backend. Anybody watching this who has bootstrapped LLVM or GCC from scratch, doing a full three-stage build, understands that this is a lot easier than that. I spent maybe a week trying to bootstrap LLVM for RISC-V on musl libc several months ago. A GCC build can take as much as three hours to do a fully bootstrapped build, sometimes more, and that's on good hardware. And LLVM can't even compile without a substantial amount of RAM and other resources. So this is not just a lot less code, but a lot less code that compiles a lot faster. It's a much smaller and simpler system, which is a lot less of a pain in the ass to get working. But now that we have a compiler, let's build some real world code with it. So how long does it take to build BearSSL with cproc? 15 seconds. I know from experience that it takes about twice as long to build it with GCC. We could also run the test suite, but the test suite takes a while and this is a lightning talk. So I'm going to switch from demo back to slides, where I just have the answers written down here. The build time for clang to compile BearSSL: five seconds on my workstation at home; GCC, four seconds; cproc, one and a half seconds. The time to run the test suite on each of these was about 45 seconds on clang, 43 seconds on GCC and 62 seconds on cproc, which is actually 73% of the speed of the mainstream compilers. So that's pretty close to the goal; in fact it exceeds the goal a little bit.
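For anyone who wants to reproduce roughly what was just done, the steps look something like the following. This is a sketch from memory of the projects' build instructions, not the exact commands from the demo; the repository URLs, the `make check` targets, and in particular the BearSSL `CC=` override are assumptions to double-check against each project's README.

```sh
# qbe: build it and run its test suite
git clone git://c9x.me/qbe.git && (cd qbe && make && make check)

# cproc: configure, build, and run the tests (it needs qbe available)
git clone https://git.sr.ht/~mcf/cproc && (cd cproc && ./configure && make && make check)

# BearSSL, built with cproc instead of gcc/clang
# (assumes the Makefile honours a CC= override, which is an assumption on my part)
git clone https://www.bearssl.org/git/BearSSL && (cd BearSSL && make CC=cproc)
```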
But BearSSL also has a less favorable test that we can run, which is its test speed tool. Before I show you that: 73% of the runtime performance, 380% of the compile performance. So just running the compiler is much, much faster, and it does this in a tenth of a percent of the lines of code required. That's pretty good performance. So here's the bad news: it's not so good on everything. I ran the BearSSL speed test, which is specifically designed to evaluate the speed of BearSSL's cryptographic implementations. And I ran this on my laptop. No, I'm sorry, I did run this on my workstation, which has a nice AMD CPU. And the numbers are less favorable. We can see here, for example, that GCC does 295 megs a second for SHA-256, but cproc only manages 159. So these numbers are worse than 70%. But I want to point out that this might not necessarily be a deal breaker. For a lot of use cases, these numbers are fast enough. For example, ChaCha20: if you're using TLS 1.3, ChaCha20 is probably being used as part of your crypto system for TLS, and if your network connection is less than 109 megs per second down, then this is not going to be the bottleneck. And maybe this is sufficient. But in any case, it is less, and that's not so good. However, another thing that can help you deal with these tight loops is that a lot of this stuff is not actually done by the compiler. The C compiler is not usually responsible for AES, for example: on my machines, we use the AES-NI instruction set, which can do it 10 times faster than GCC can. So really tight performance stuff maybe is not so much of a concern. What I think is going to be more frequently the case is that you're going to see something closer to the 70%. But if you want to see the hard numbers, I have the full logs for BearSSL here. Some of them are misleading, so I didn't include everything here, but if you want to browse them yourself, feel free to take a look at these logs. So based on this, we might have to adjust the claim to anywhere from a quarter to three quarters of the runtime performance, definitely better compile time performance, and a fraction of the code. In terms of portability, this is another place where we need work, because LLVM and GCC support a huge range of targets, which is really just the product of a lot of labor, and qbe doesn't have as much labor. But the good news is that it takes a lot less work to add a backend to qbe than it does to add one to one of the other compilers. So: x86_64, 2,000 lines of code; aarch64, 1,600 lines of code; riscv64, 1,400 lines of code. And riscv64 is the newest one. It was written in 341 commits by Michael Forney over the course of eight months. One person is enough to port qbe to RISC-V. So if you want to author a port yourself, it's not that hard. I mean, it's a project, it takes time and work, but it's not nearly on the scale that it is for something like LLVM. And it's all written in C, not in C++, which is an advantage for me. Some ports we may have in the future will probably be PPC64, POWER9, which will give us a chance to develop big-endian support. Another major limitation of qbe is the lack of any kind of 32-bit support, and implementing 32-bit platforms will require a great deal of refactoring. So these will be more challenging than other ports, but we do want to take them on. Ideally we'll be working with 32-bit x86, armhf, riscv32, maybe a few others.
If you have any targets that you need, it would be great to have you come and help develop those as well. And then additionally, there is work underway to add support for the Plan 9 flavor of assembly in the output, which will be very convenient if you want to have an alternate C stack on Plan 9, especially given that the Plan 9 compilers have a lot of lacking C11 support. So bright future in terms of ports for CUBE, but today are a little bit limited. So that's CUBE. CUBE is a compiler that I think is really interesting. It is very simple and very small, but it manages to achieve a level of performance that I think is acceptable for most workloads. And I think that's pretty impressive, especially given how small it is in comparison to the major compilers. That complexity reduction is really important. It means that it's more reliable, it's easier to debug, easier to understand, easier to maintain, lighter weight, less hard drive space. I think those add up a lot. And to me, it's worth it to lose a little bit of that performance in exchange for that greater degree of simplicity. But that's it. I hope you'll check out CUBE. Thanks for listening. Hi, thank you for my talk. There is a long question, what makes people with lines of code better? Surely all those LLVM lines of code are important features? That's a good question. I would say that when you get to a certain point, you start to see diminishing returns as you add more code. So if you want to get to the long tail of optimizations, with a goal of something like LLVM is to get as many optimizations as possible, the more optimizations you want, the more code you have to write to get them. And so you get less important optimizations, but they start to build up. And a lot of that also just comes down to the fact that LLVM has a much more complex design, which is designed to do a greater variety of things than CUBE is. But those extra lines of code do something, but they do less and less with each new line of code. Thank you. Is full ISOC99 support? Plenus, does it support C6C source? So this is a question about Cproc. And I think that VLAs are not, I think VLAs are C11, but in any case, VLAs and features like them are not so much planned, but they're also not unplanned. So if somebody wants to come along and implement those features, I think Michael would take them. And there have been some discussions around how to do them. It's just not as important. It is technically an optional feature in the specifications, so it is still a conforming compiler without them. But in order to compile some software, it would definitely be helpful. Thank you. And that's CUBE support, threat local storage and full ABI compatibility with the primitives for the x8664 system. I think that compatibility with the x8664 or system V ABI is all there, or at least I don't think there's anything that CUBE can do that is not represented by the ABI. There might be some features of the ABI that are supported by CUBE, but all CUBE programs are conformant with the system V ABI. As for TLS, again, just like VLAs, I think that's possible, but it requires work and nobody has come along to do the work yet. I think that would be a useful one to get a missile live-sea to compile, but not yet. Can you be used as a library to a long-term cogeneration? I think there is plans, some vague plans to make CUBE useful as a library. It would probably be fairly easy to refactor it for this purpose. So I think that's an accepted feature request, which has yet to be implemented. 
Are there any questions you have? Our dark wants to support WebAssembly as a target in the future. I don't think that there are any plans right now to support WebAssembly. I actually think that because WebAssembly is a stack-based language and most CPU architectures are not stack-based in that same sense, that it would be one of the more difficult targets to add. But I think it would at some point be possible to implement WebAssembly, but nobody is working on it right now. Thank you. Will CUBE be used for the compiler of your new programming language? Yes. Oh, okay. I want to ask the questions. How much of the code bases of LLVM and GCC are target-specific? I'm not sure off the top of my head, but I think it's somewhere between 10,000 to 100,000 lines of code per target. But don't quote me on that. I think that's all of them. See you again.
qbe is an optimizing compiler backend which consumes programs in a simple intermediate language, optimizes them, and emits assembly for x86_64, aarch64, or riscv64, aiming to achieve "70% of the performance" of advanced compilers like LLVM in "10% of the code". This talk will briefly introduce qbe and its intermediate language, explain how it works and what it's capable of, and go over some sample programs which can be written in it.
10.5446/57016 (DOI)
you Hi everyone, my name is Ruth Cheesley and I'm project lead for Mortec and today I'm going to be speaking for about 40 minutes about our experiences of implementing an incentivised partner programme within the Mortec open source community. So for those of you who don't know me, my name is Ruth Cheesley, my pronouns are she, her. I work full time for Acquia as project lead for Mortec. My background is around about 18 years of using and contributing to different open source projects and my base is in Ipswich in the UK. If you want the slides from this or any of the links or resources or things that I've mentioned in this talk, they're all going to be shared on my notice page afterwards. So I will also tweet that link out on our Cheesley. So if you want to grab any of the resources or any of the information or you just want to connect afterwards, please do ping me on Twitter. So what we're going to be covering today, we're going to be talking a bit about the contribution side of establishing this partners programme. So how do we define what actually constitutes a contribution? How do we track those contributions over time? How do we assign them to organisations to understand who is contributing and who's not contributing within our project? Then the thorny subject of finances came into play. So how do we make the financial aspect of becoming a partner equitable for everyone, wherever they are in the world, so that anyone can become a partner and it's the same kind of value financially for them to do that. And for us, we had the added challenge and we didn't actually have financial autonomy, so we needed to set up some process for us to have transparency over our finances. And then finally, we'll go on to talk a bit about how we actually set this programme up. What does it actually look like? What do partners get when they become a partner? How has it worked for us? So what results have we seen? Did it fulfil the goals that we were hoping it would? And also what have we learned and what things are we going to do differently? Or what have we tweaked as the programme has run its course in the community? So first off, I'll start a little bit of history about Maltic because I'm aware that some of you might not know what we are or how we came to being. So Maltic is an open source marketing automation platform. It was formally launched to the public in 2015 and you can find more about that on Maltic.org, which is our website, and we're also on GitHub as Maltic slash Maltic. When Maltic was founded, there was a corporate software as a service company created as well to provide a hosted environment of Maltic specifically aimed at businesses. That business was acquired by Orchia back in 2019. And since that point, the community has established its own governance model and it's now operating as a self sufficient open source project. Before this point, it was very much dependent on Maltic Inc and now Acquia for pretty much everything in the community from running releases to financing things to can we do this, should we take that direction. So it's been a really big few years for the Maltic community and an awful lot has been achieved in that time. So why did we want to have a partners program to start with? Well, we had a few goals for this program. We wanted to try to encourage more in the way of practical contributions to Maltic, but also financial support for the project and we wanted those both to be consistent. 
And we looked at lots of other projects and a lot of the time it's just you pay this much money, you become a partner. Well, we actually wanted to tie that status to being actively involved in creating and nurturing this community. So we tied in the requirement for encouraging more practical contributions in our partners program goals. We wanted to help people in our community find those people who are the makers, the people who are actively helping to support the community if they needed help with Maltic. So we really wanted them to support the people who are supporting the project in the community rather than work with people who actually aren't making Maltic better. They're just making a fast buck from the project. We wanted a way to promote organizations who are contributing in a very clear and transparent way. So we wanted to be able to say these are the people who are financially contributing. These are the people who are practically contributing. They're awesome. Go find out more about them, work with them, have conversations with them. And this gave us a really nice way to be able to do that. We could say here are our partners. These are the people we suggest that you work with because they're supporting us as a project in a community. We also wanted to give those organizations who are giving their money, they're giving their time, they're really putting their heart and soul into Maltic, something to be proud of, a status that they can share and brag about in their own circles. And becoming a community partner was something that we felt would help with that. It would give them something to be proud of, something to kind of share. So without further ado, let's jump into the program and what we needed to put in place in order to implement this in the Maltic community. So we'll start with contributions. And there are a few things we had to get kind of really defined before we could move forward with this program. Firstly, we had to decide, well, what actually is a contribution? Now, I came to open source through a non-traditional route. I don't come from a development background. I have a very high level of technical knowledge, but I'm not a developer. I came in through documentation, running user groups and things. So I've always been really keen that we made sure that those contributions were just as valued and recognized as those like creating a pull request, for example. We also needed to identify where those contributions happen. So where are we actually looking to see how something happened that we consider a contribution? And then also, how do we track those out-of-channel contributions? So for example, if someone has run a team meeting or if someone has done a really amazing issue on JIRA, which is not related to code, we wanted to find ways to track those, but also other things that might happen, like someone might speak at a conference and we could consider that as a contribution. We still needed to be able to have a way to credit those people for their contributions. And then how do we associate all of those contributions I just talked about with an organization? So the organization can be ranked and we can consider how much they are contributing, whether that's a level that we consider to be acceptable to join the partners program. So there was quite a lot for us to take on board there. When we started this program, I did quite a lot of research of how can we bring all of this information together? Like the data is there, it's in all of these different places. 
How can we bring all of that together into one place that lets us get that holistic overview of individuals and of organizations? And in my research, I came across an open source community CRM, which I'd not really come across the term before. So CRM comes from the sales word customer relationship management tool, but this is specifically focused on building and nurturing and growing communities. It's available on GitHub at savanna HQ slash savanna. And it's also a hosted version, which we use because we didn't have the resources to actually manage the infrastructure, but it is an open source tool. And it's really awesome. It allows us to bring together all of our community channels into one central dashboard, which we use to see how things are going in the community. It lets us determine what's a contribution. So some are baked in like GitHub pull requests and a few other things I'm going to talk about later. But we can also define our own custom contributions. We can use the API to push contributions in directly as well. So it's really, really cool. It hit many of our requirements that we had for a tool that we could use for this purpose. And also it lets us identify an organization and then associate individuals with the organization so we can get a profile for what an organization is doing across our entire community. So this was amazing. I started off using it locally myself and then we moved on to having the hosted account once we'd kind of done a proof of concept. And with Savannah, the things we track are GitHub pull requests. We also have a forum, which is very busy on some on their discourse. And in our support category, we have the option for people to mark a reply as a solution to their problem. So we consider those replies which are marked as solutions as a contribution because they've contributed something which is helpful, which has helped someone with a problem. We also use Slack and Slack is our main channel for conversations in our community where we're having sort of like working on projects together or doing team meetings, for example, all happen in Slack. And the way that the Slack integration works is it asks us, is this a contribution? And it's based on, for example, someone saying thank you in response to someone or someone who's Slack, Savannah thinks has provided useful feedback on your product or your community or something happening in your community. So that Slack integration isn't sort of fully automated. We have to then say, yeah, that is a contribution or no, that actually isn't a contribution. Someone's just saying thank you sarcastically, for example. So we do have some control over that one. And then we've also got Meetup and we organize all of our meetups on meetup.com, which is a really great platform. If you haven't used it before, it allows you to organize meetups all over the world. We have an account there and all of our official meetups are managed through that. And so anyone who hosts the meetup, whether they host it on their own or with other people, that's considered a contribution. We also monitor Reddit. So if help is given in a thread, it comes up and we can say, yeah, that's a contribution. Stack Exchange as well, accepted answers to a question. We consider that to be a contribution because you're helping someone out. We have a podcast in our community, which the criteria for acceptance really is that they're fully altruistic. So they're not like a thinly veiled sales pitch. 
They're purely there to benefit the community and to grow knowledge and awareness of Mortec, for example, within our community or externally. And blogs that are written on Mortec.org because anyone can help with writing a blog on Mortec.org if they're marked as the author that will come up as a contribution as well. But I mentioned that we often have other things that we need to give credit for. And I feel this is also really important. So we have the manual or API assignment and team leads in our community have access to Savannah and they can manually assign credits to a contributor for anything that they feel is relevant to have a contribution credit. So that might be, for example, if someone has led a sprint and they've organized the whole sprint or whatever that might can be considered a contribution. People who are speaking at our conferences, we may well add those as contributions. If you've proofread an article before it goes online, or you get the drift, all those kind of things that there's no kind of data point where we can say, yeah, that was a contribution, but we still want them to be valid contributions in our system and that person to be credited. And there is an API as well. And this allows us to track other activities. We're not fully using it yet, but it's sort of on the radar for us to explore this year. So we use JIRA for tracking tasks that are not code related tasks. And one of the things we want to do is if you're assigned to an issue and you close it for that to be considered as a contribution as well, because you've probably completed a task. And also the person who actually makes GitHub releases, for example, that's something that we're looking at doing. Another thing we're talking with Savannah about is people who review pull requests. So they market as approved or they market as needs changes, because that's also an important part of contributing to a pull request being merged. And so we're bringing all of this data in and then I use this to actually report back to the community on a monthly basis, a quarterly basis and an annual basis. And for individual contributors, we report back on the top contributors and the most active members. And this can get quite competitive. People are quite proud about being in the shout out each month. So it's had like that added benefit of really encouraging people to be like, oh, wow, these are the people who are really helping to make more tick. And also the people who are contributing are a bit like, I want to get in the top 10 because I want to be in the shout out. And in terms of the organizations, it's quite similar. We report on the top contributing organizations and also on the most active. And with organizations, we also do a plus minus compared to the months before. So it shows whether organizations are increasing or decreasing their contributions over time. So here's a quick screenshot that shows you the monthly shout out, which I just mentioned. So this is the one that gives a shout out for organizations. And if there's something unusual in the data like this was December and there were two weeks in the month where most people were not working. So all the contributions bar, I think one company were lower than they were than a month before. So I usually just call that out if there's something that I know is slightly strange in the data. And also we give people information about how to get in this list. So basically how we assign people to two organizations and what contributions are defined as. 
And the monthly shout out for individuals is very similar. Just we're mentioning people. We don't have the up down arrows. If they don't have a slack profile. So you'll see that just mentioned by like their foreign username, for example, the name that they've used on GitHub because they're not yet on slack. And we also mentioned the number of new contributors we've had in that time period and the number of new members who've been joining the community. And again, because this was December, we had slightly less people joining. It's not really surprising because a lot of people are away from their computers, hopefully anyway, over the festive period. So that's how we dealt with the contributions side and getting a sense of how we can determine how an organization is contributing to the project over time and look at the activity levels. Let's talk about finance now. So this was probably one of the slightly more tricky areas for us to get into place. We needed to figure out first, how do we actually accept money at that time? The only money we had available to us was money from Acrea. And we had to ask every time we needed to spend money, we had to make a business case that had to go through all the formal Acrea processes. It was quite clunky. They're a very big company with a very big financial team. We wanted to have a way to be able to manage this all ourselves. And also we were planning our first ever conference. So we needed to be able to sell tickets and spend that money in a transparent way and have the money come back to the community. We also needed to decide how much is enough for us to consider that someone is contributing financially to the project. So we needed to be able to set a threshold to say, this is the level that you need to be contributing at over this much time, which is the next point, how long do we have that taking place over? And also how do we make that threshold equitable worldwide? Because if we set it at $100 in the US, is a different value to in the UK, is a different value to Nigeria, is of different value to Russia for example. So we needed to find a way to make the amount you had to contribute fair wherever you were in the world. So, fun times. For us financial transparency was really key. I wanted to make sure that everybody could see all of the money coming in, all of the money going out, that there was the ability for us to have several admins who are responsible for approving or rejecting expenses. So again, we did lots of research, I asked lots of other open source communities how they were dealing with this challenge, what were the tools that they were using that worked well for them. We knew it wasn't going to work to have a bank account. Because I'm based in the UK, other members of our team are based all over the world, we've got people in Africa, people in Europe. Banks are just not really set up for that nowadays and the transparency to the community was really important for us. So we ended up setting up a open collective hosted by the open source collective. So effectively act as our bank, they hold and manage our funds, but they also allow us to create projects to raise money for specific projects within our community, to sell tickets for events for example, and also to access some services like HR services if we're working with contractors, training services, so I'm going through a bunch of training courses at the moment to help me be better at leading the community. 
So it was a really great opportunity for us to actually get started in managing our money. We also applied for GitHub sponsors and GitHub sponsors is tightly integrated with open collective, so it just pushes the money across to our open collective every month. And it allows people to sponsor us in either places, either on open collective or on GitHub sponsors, whatever works best for them. Some prefer GitHub because of the prominence on their company page for example, others prefer open collective for other reasons. So that was us getting started, but then how do we make those thresholds that I talked about equitable? How do we make them fair around the world? Well, my previous life I used to volunteer in the Jumla community and they had this question as well when they were looking at pricing their certifications. And they looked at using a tool called the Big Mac Index. So if you've never come across it, it's something created by the economist. And it's very interesting to read about. You can read the information there, but it basically allows us to determine a relative amount based on various factors for countries around the world. Now it's not perfect. So you'll notice that there's one country right at the beginning here that's going through massive hyperinflation. So obviously it's not going to be very applicable there. But generally by and large, it does actually work really well and it does give us a level that's fair. Or we believe that's fair for everyone around the world, wherever they are. So here you can see the figures that actually the amount that you would have to pay, for example. So if you in Sri Lanka, it would be $62. I think it is my eyesight is not so good. If you in Vietnam, it would be $51. If you're in the US, it's $100 a month. So this is the minimum contribution that you have to pay by country. And the nice thing about this is it allows us to make that fair, but also make it very clear to people. And if the country isn't here, we just try to find out what the big map cost is in their country and we can work out the calculation, or we can do some maths with similar countries. So that's how we get on with that. The next challenge we had was how long. So how long do we actually keep these contributions going before we consider that it's okay to become a partner? And we decided that it should be three months. So three months is the minimum term for having consistent financial contributions and consistent practical contributions to MORTIC. So that's the time period from when they apply. We look back over the last three months to see have they been contributing in both ways. Okay. So that's the financial aspect of this. Well, let's move on to how we actually built the program. So how does it actually work in action and what do the partners get from it? So we're going to cover how do they apply to join as a partner? What do they actually get as a community partner? And also how are we trying to incentivize them? So in terms of applying to become a partner, basically the onus is on the partner. So they can apply when they meet the criteria. They must meet the criteria before we consider them for being a partner. And you can see the criteria here on the side, but basically it is listed as at least three months contributing and we look in Savannah to see the activity for that. At least three months financial contribution at the minimum level and we set it based on the country of the head office or their primary location. 
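Going back to the Big Mac Index maths for a moment, here is a concrete sketch of how that per-country minimum could be derived. This is not Mautic's actual tooling, and the Big Mac prices below are rough placeholders rather than current Economist data; the rounding policy is also an assumption.

```python
# Hypothetical sketch of a Big Mac Index adjustment for the partner fee.
US_BASE_FEE = 100  # USD per month for a US-based partner
BIG_MAC_USD = {"US": 5.81, "Vietnam": 3.05, "Sri Lanka": 3.60}  # placeholder prices

def local_fee(country: str) -> int:
    """Scale the US fee by the ratio of the local to the US Big Mac price."""
    ratio = BIG_MAC_USD[country] / BIG_MAC_USD["US"]
    return round(US_BASE_FEE * ratio)

print(local_fee("Vietnam"))    # ~52 with these placeholder prices (the talk quoted $51)
print(local_fee("Sri Lanka"))  # ~62, matching the figure in the talk
```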
We also have tied into this that there are no code of conduct breaches against the organization or team members in the past 12 months. And we also track this in Savannah. We have the option to apply tags or labels and have notes against individuals and companies. So if there have been any issues (which we haven't had any incidents of this nature, but if there were) then that would affect their partnership status. So what do they get if they become a partner? What do they actually receive? Well, first off, they receive a very prominent listing in our partners directory. Each partner has their own page in the directory and that page includes a backlink to their website. And you can see the listing here of some of our current partners. We put the top three partners, so Acquia, Friendly and Leuchtfeuer Digital Marketing at the moment, actually on the homepage of mautic.org in a featured partners block. So the top three are always featured on the homepage with a link to their partners page. So that's a really great thing for them in terms of raising awareness of their brand within the community and for people who are new to the community. We rank this list of partners based on their activity and their contributions in the previous month. So we rejig the order based on their order in the shout-out that we do at the end of the month, basically. And we also mention them in all of our official conferences. So generally speaking, it's in the keynotes where I mention all of our partners and thank them and so on and so forth. And again, it's another way of being recognized in a formal, official way by the project. And the individual page that our partners have allows them to showcase the services that they provide and also the contributions that they make. So you'll see here that in terms of Acquia's page, it mentions that one of their contributions is paying me full time to work as project lead. They also have the engineering team, which pretty much did the whole of our Mautic 3 release, which was a massive, massive undertaking. And they're a major contributor to strategic initiatives. We give an overview of the company with all of their contact information, what they do. We highlight the contributors who have been active in the last month. We list them by name. And we also have a graph of their activity over the last quarter in terms of contributions and conversations. So it gives people an idea of actually how active they are in the community at this time. They can list relevant case studies, which go into our main case studies database. And also there's an opportunity for them to give information about any partners programs that they might have for people who are using Mautic. So for example, if someone is using Mautic but they want to have a partnership with an agency for larger clients, or for when they can't service the client themselves, the agencies can all provide their partners information if they have a partners program and encourage people to reach out to them. And there's also a lead generation form, or a link to their website if they prefer to have a button to say go here, fill in this form, which enables them to get all the information they need in order to put through a sales inquiry effectively. Another feature that we have for our partners is that of roadmap prioritization. So generally speaking, the process in the Mautic community at present for new feature requests is people will put in a feature request on our forum. We have an ideas category in our forum.
They'll explain what it is and then the community will have some discussions and people can vote. So they have a specific number of votes which they can use to vote for features that they want to see in Mautic. And the top features that are voted for are the ones that we consider for the roadmap and start to work on. If you're a partner, you have the option of being able to submit up to three features directly to me, the project lead, per calendar year. And they can either be funded, so you could say, I want to have a new experience for creating focus items, and here's $5,000 that I have, I just don't have the resources to build it myself; or it could be unfunded, so they say, we need this feature, it's really important for Mautic, we can't fund it, but we're happy to help with promoting it and trying to get funding or whatever. And those features come directly to me for consideration to be in the roadmap without having to go through that process of community voting and prioritization. So that's a feature that only partners have. It's not available to anyone else. And we also try to really make sure that we promote our partners. We promote them on social media and through email when they actually become a partner, because that's a really great thing to share with our community, that we have a new partner who's contributing in that way. We also feature our partners on every community newsletter we send, with icons and links to their partners page on the website. And we try as much as we can to reshare relevant news from our partners into our social media channels. So we're promoting awareness of what they are doing within our kind of sphere of influence. We're telling other people what they're doing in their companies. So how has it worked for us? We've had the partners program probably for about a year so far. And over the last year, we've seen a significant uptake in sponsorships from organizations. We've also enrolled six partners and we have one more partner in review at the moment. So by the end of this quarter, we should have seven active partners in our partners program. We've seen sustained contributions from partners. So we've only had, I think, one month where one partner fell off the radar slightly with their contributions, mainly because of staff sickness with COVID. But generally speaking, every partner has maintained a consistent level of contribution, practically and financially. And in fact, we've seen some partners really accelerate their contributions so that they can get up the rankings and be featured more highly. That's not the only reason (obviously they love contributing and they see the value), but you know, that is part of the incentive as well. Interestingly, we've seen new contributions from people who want to be partners, but they're not actively contributing in the community and they want to know how. So this has been everything from pull requests, to people helping write documentation, to people helping with running events, all kinds of places in the community. We've had people stepping up and saying, can I help with this? Can I help with that? Ultimately, because they need to be able to show that they are contributing in the community. And we've enabled community members to build relationships with those makers in the community. So it's driving mutually beneficial growth in the ecosystem.
If those people and those organizations succeed and grow and thrive, it means they'll have more capacity to be able to help the Mautic project succeed and grow and thrive. So from our perspective, it's a no-brainer to really try and grow those organizations, because they see the value and we get the value in our community. One of the things we really learned, or I really learned, I should say, is that transparency really matters. So it's really important that if you're implementing this kind of program in your community, you have very clear policies and workflows which cover the whole process: what the criteria are to become a partner, what you have to do in order to apply, how the application is reviewed, who reviews it, what the workflows are, how you promote them, the whole process. It just makes it much less open to concern or criticism if you have it all clearly documented. And I've shared on my Notist page links to some of our policies in the Mautic community, in case you want to take a look at some of the policies that we've written. I also think it's really important that you're clear on the expectations that partners have in terms of your promotion of them, so how you're actually going to promote partners once they've signed up, because they may expect more of you than you're actually able to deliver, and that can lead to some bumpy conversations. So be really clear how much is in your capacity to actually do for your partners and how much they should expect in that regard. I also think it's really important to set very clear guidelines on what they can list on their partners page. In fact, from our perspective, it has to be reviewed. So they have to provide the content to us, we review it and we actually upload it to the page, because we don't want those partners pages turning into a spam fest. They need to be useful and relevant and not just buy me, sell me, that kind of thing. So be really clear how many backlinks you're willing to allow, what information and how many words, that kind of information they're allowed to provide on those pages. So then, coming to the end, I thought I would share some thoughts on what we would change if we were doing this again, or what we are changing in this process. So by far and away, for me, one of the biggest things is that I have to update those pages every month. So I have to update the images with the activity, I have to update the names of the people who are active and the order. It would be amazing to be able to automate that, or embed the images and the resources and the information on the portal. That is sort of limited by what Savannah has available, and it's a discussion that I'm having with the person who maintains that project. But for me, it's something that was a compromise I was willing to take; as we scale the program, though, it will become more and more difficult and take longer to actually do that each month. So we definitely feel like, as we're scaling, we need to improve the capabilities on our partners portal, so that people can actually search and filter for providers who have a SaaS solution, providers who support you hosting your own Mautic solution, people who speak German, people who are based in the Middle East, that kind of thing. At the moment we don't have that capability, and so it's not so easy for people to find who they're looking for. So that was purely a factor of needing to get the MVP up and running very quickly.
So it's all built with just basic static pages in Drupal with the layout builder. So ultimately what we want to do is actually build a portal, or find a plugin or an extension, that will allow us to do that more effectively, so that people can find the partners that they want to work with. I'd also like to find a way to automate that following-up process if partners are declining in their activity, for example send me an email to say this partner has had a 50% reduction or something like that. A lot of the stats and everything are manual at the moment, so again it's a little bit time consuming and it relies on people following up on that. We've been having discussions in our leadership team about the potential of having a tiered program. So at the moment it's just: you're a partner or you're not a partner. And we've been talking about, in the future, we might consider having different levels, probably based on financial and practical contributions. We haven't really fleshed it all out yet, but that's something that we're considering. We went for this very basic one-level, no-tier system to get it up and running quickly and easily with the minimum amount of fuss. But we do feel like some of our partners are doing a lot more than others, so we're thinking about how we can recognize that, basically. And also we're looking at ways we can help them and we can help the community. So some of the things we're exploring there are things like sponsored content: allowing our partners to write content for our blog which is relevant and useful for our audience, but that is a sponsored content piece. So it's talking about perhaps their platform or their tools or whatever, but it's very clearly marked as sponsored content. So that's another thing where we're trying to think how we can give more value to our partners, but at the same time be getting value in our community and not just be a spam fest of buy my services. It needs to be something that's relevant for the audience as well as giving value to our partners. So that's me done. I am happy to take any questions. If you want to email me rather than discuss in the chat, you can contact me at ruth.cheesley@mautic.org. As I mentioned at the start, I'm going to upload all of my resources onto my Notist page, which is noti.st forward slash rcheesley. So the slides will be there, the recording with subtitles will be there, the links that I've mentioned, and also some other useful resources like the policies that we've put in place and other things that I've found useful in this process are all going to be up on my Notist page. I will tweet the link out as well, so you should be able to access that on Twitter. But yeah, any questions, I'm here to answer, so feel free to fire away. Thank you for your time. So far, there is only my own question. So all listeners, please do post your questions for this talk, if you like. So the question is: have you encountered the situation where the partner has fulfilled the written requirements, but that was not what your organization expected, so you had to update the rules and refuse to give out the reward? Have you had any stories like that which you'd like to share? Yeah, no. I mean, we haven't had anyone misbehaving or gaming the system to try to become a partner. And actually, we review all the applications within the leadership team. So we look at the contributions, how consistent it has been, what those contributions are.
So we don't want, like, 100 updates to readme files to count towards the partners program, because although it's a contribution, sometimes that can be used to sort of game the system. So no, we haven't really had anyone try to game it yet, but it's definitely something that we will have to be aware of. One would hope that people would behave themselves and not do that, but you never know. Yes, it would also come under the code of conduct if someone was being really irresponsible at some point. Any other questions from the room? There was some chatter about different tools you can use to track community metrics, and there are other tools out there that you can use to do what we use Savannah for. It's not like the only tool that's available. I definitely recommend people have a look and see what the different tools do. Quite a lot of them do free trials, so you can see if it pulls in the information and gives you the metrics that you need, and then, yeah, go from there really. For us, it's been really helpful because it ticked all the boxes we needed. And there were also some questions about the code of conduct, like tying a code of conduct breach to being a partner. And I felt that was really important, for the reasons that someone mentioned: sometimes you can have people who are partners who are just very toxic. So I wanted to have that in our policy, that that was actually part of the requirements, really, for a partner to be in good standing. I think that's what we say: in good standing, without any active code of conduct issues. Cool. We have two new questions right now. How do you handle inactivity of prominent people in the community who may have disappeared, and how do you denote them in a fair and transparent way? Yeah. So, is that talking about individuals or organizations? Individual-wise, I get notified if someone is going inactive in Savannah, so that helps me to notice if someone isn't around as much, because you can't be an all-seeing eye all the time, you just can't. So that's very helpful. It also tells me if someone comes back from being inactive. So quite often I will go onto those threads and say, hey, welcome back, great to have you back. We've had some people come back after like five years of inactivity. You know, so it's quite nice to recognize that. In terms of the organizations, if organizations start to drop off, that will come up in the monthly shout-outs. They'll start to drop through the monthly shout-outs. But at the moment, pretty much, we know when that's happening because we're a relatively small organization, and like in any small organization, if the big players start to reduce their engagement, you notice things start to not get done. So the process that I've had really is: if an organization (because I mentioned one started to drop off due to COVID-related staff illness) starts to drop their contributions, I just reach out to the people from that organization and say, hey, what's going on? Is there a problem? We just noticed that you're not as active. And usually they're just like, oh, someone's been off sick, or we've had a staff change and the new people aren't up to date with the open source stuff and we're getting that in progress. We generally would give them a couple of months to get back up to those levels. But if it continues for a quarter, then we would look at saying, right, well, we need you to get back up to those contribution levels to be listed on the partners page.
So we do give people a good amount of time if there are internal things going on that are causing them problems with their contributions. And the challenge is doing that fairly and transparently, because you don't need the whole world to know what's going on in that business at that time. So it's just communicating skillfully, I guess. Okay, so hopefully that's answered your question, Ashley. How do you manage the pipeline prior to them showing in Savannah? Do you use other tools for future partners? So, as soon as someone shows up anywhere in our community, they're on Savannah. So as soon as they say anything or respond to anything, they're on Savannah. We can't do much about the people who are never active. For those people, we usually just try to make sure that we are continuously reaching out, continuously showing them how they can help, how they can contribute, and the value of that as well. We've also dramatically improved our onboarding docs. They're still not perfect, but a lot better than what we used to have, to make it very easy for people to make those contributions. And in terms of future partners, a lot of that is kind of conversations with people. It's building relationships with people. It's having them understand what the value is of being a partner in the community. I heard in one of the sessions yesterday someone saying they use this software and it's critical to their business, and if it went away it would cause a big problem for them. For a lot of our sponsors, that is why they sponsor the project: because they really like the product, they use it a lot, they use it for their customers, they want to support its continuation. And the partners are like a step up from that, because they're playing a part in actually doing that. So initially there was a lot of outreach to people who are in those organizations: we're looking at setting up this program, these are the boundaries, this is what we're expecting, what do you think? And pretty much everyone was like, awesome, we're there, we'll make sure that we've got our contributions up to the level you'd expect to be able to join. And nowadays it's more a case of, when I see companies coming up in the shout-outs a few months in a row, usually if they're not yet a sponsor, I'll have a conversation with them and just reach out to them and see if they're interested, and make sure they're aware of what it is and what the requirements are and how to become a partner. And that's helped in a few cases. There is plenty of time, 12 more minutes, but there are no new questions. Unless anybody starts typing right now, we can finish the questions section early. Yeah, okay. Thank you very much, Ruth. Thank you to all the listeners. That was a great talk. Okay, great.
While money is helpful in open source projects, hands-on contributions are probably more valuable to the long term health and sustainability of the project. In the Mautic project, we wanted to establish a partners programme which would allow us to highlight to our community the organisations who were both financially supporting the project as a sponsor, and were actively contributing to the project. Here's how we did it. In this session I’ll outline how we came up with a way to make the financial element equitable for partners around the world, and the steps we took toward ensuring that organisations couldn’t just buy their way to partner status. We’ll dig into thorny topics like determining what we mean by contributions, how we recognise non-code contributions alongside code contributions, and the tooling that underpins it all. I’ll also explain how we’ve built our partners portal to incentivise active contributions from the organisations, and some of the improvements that we’re thinking of making in the future.
10.5446/57017 (DOI)
Hello everyone. Today we are going to talk about the state of open source databases. And I wanted to cover first the topic of innovation: what is happening in the open source database space, and how it is truly unparalleled in those days. I remember when I started in this space sometime in the early 2000s, databases were kind of considered boring; we had a bunch of dominating relational databases and there was not a lot really happening in the industry at that time. Well, the situation has completely changed now. And what I wanted to cover is the most interesting innovation which I see happening in those days. The first one is, I think, the focus on distributed databases, which I think is very much connected to cloud computing, where the idea of just scaling up is not really feasible anymore. And we see both the older open source databases, such as MySQL or Postgres, adapting for this new distributed world, as well as some of the new generation databases which are designed from the ground up, such as YugabyteDB or TiDB, which are really built to be distributed but at the same time still support a relational model and SQL. The second interesting idea, which is so far mostly applied to analytical databases, is the separation of storage and compute. The idea in this case is that we should be able to scale our storage independently from the compute resources which are required to process the data. And what that allows is to use something relatively inexpensive and easily scalable, such as an object store, S3 and compatible, to store the data, and then to have more on-demand compute which can be scaled up and down rapidly depending on the processing needs. You can see a lot of analytical databases in particular are built this way those days, and there are some ideas about how to apply this method for transactional databases as well, though I would say this problem is not completely solved yet. The next important idea for open source databases, and other databases as well, is the serverless database, where the idea is that you as a user should not be thinking about the database in terms of how many compute nodes or storage nodes you deploy, but more about the outcome you want to get: how many transactions a second do you want to be able to operate, or how fast you want queries to run, and think about the system in those concepts rather than servers, and the database vendor would scale those for you up and down automatically. This is a very attractive model, and it can be applied both to analytical and transactional databases, though usually it is in the cloud, because if you as a user want to operate things yourself, even with a serverless concept, well, you still have to think about provisioning the servers or something, at least at this point. We also have a lot of innovation in database models. Previously we had just relational databases dominating, and frankly even now the majority of mission critical data is stored in relational databases, but we have a lot of interest and innovation in purpose-built databases. I am particularly very interested in time series databases, which have been growing very rapidly, especially with the needs of observability, Internet of Things and so on, which generate lots and lots of time series data which needs to be stored and processed very effectively.
Graph databases, that is another set of data stores and problems which do not really fit very well in the relational database model and language, have been growing rapidly; and the third one which is interesting for me is databases providing a set of data structures for your application developer to use. Redis is of course the most well-known database which provides that concept, though there are others as well. Another interesting development, I think, is the idea of multi-model databases, where you can see some databases can support multiple data models rather than one, and some of them even may be able to talk different languages and protocols on their own. Some of the more interesting examples of those shape changers could be ClickHouse, which in addition to the ClickHouse native protocol can speak the PostgreSQL and MySQL wire protocols; VictoriaMetrics, which is a time series database, has InfluxDB and Graphite API compatibility; FerretDB, and this is a project I helped to co-found recently, allows you to use PostgreSQL as a back end while talking to it through the MongoDB protocol; and then another development which has been very exciting for the PostgreSQL ecosystem is the release of Babelfish by Amazon Web Services, which provides Microsoft SQL Server compatibility. So you can see a lot is going on in this case. So, innovation has been great, but what about market evolution, right? What is the biggest factor which affects open source in general and open source databases in particular? As I asked people about that, many tell me that it is the cloud, and I think they are right. The cloud is really impacting databases very much. One interesting illustration coming just recently was that DB-Engines called Snowflake the database of the year, because Snowflake was the fastest growing database according to their methodology, and you can see it has been quite a few years since a non-open-source database took this place of the fastest growing database by this method. And I think that really is a very important lesson to everybody in the open source ecosystem, in terms of how we really need to be thinking about the business side and usability of open source if we really want open source to continue to be the dominating factor. But Snowflake aside, what kind of cloud impact are we really seeing? On one side, we see that the cloud really helps open source database technologies to simplify and maximize adoption. On the other hand, it changes the opportunities for monetization. What that means is it changes who is actually going to make money on that open source database technology. In this case, I think the fitting words of this very wise man, Mårten Mickos, come in handy: that open source is not a business model. Open source is a development model, a distribution model, but it is not a business model in itself. And so, as things progress with the cloud, different models are now needed for open source databases, which is not pleasant for some of the vendors. For example, the cloud really allows the hijacking of the GPL. The companies which have been relying on GPL or dual licensing to make sure that only they are able to commercialize their database technology, such as MySQL, well, that doesn't work in the cloud anymore.
If you think about MySQL in particular, while Oracle is the copyright holder, I would imagine that Amazon, with Amazon RDS and Aurora, is making more money on MySQL than Oracle itself, and they can do that even though MySQL is GPL software, because they're not distributing a commercially modified version, they're just hosting it in their cloud. This is, I think, where governance really impacts a lot how different vendors see the impact of the cloud. And two different governance models for open source, again at a very high level, could be foundation driven, when multiple vendors come together in some sort of non-profit driving the project forward, or single vendor. If you look at foundation-based open source, the example in open source databases, first and foremost, would of course be Postgres. They really are very excited about how the cloud impacts their technology, because the cloud really helps to accelerate adoption. The fact that lots and lots of cloud vendors have PostgreSQL accessible is one of the reasons PostgreSQL has been getting so much traction recently. And yes, of course, it changes who captures the value: a lot of the value and money driven by PostgreSQL is going to the cloud vendors those days, but the PostgreSQL community at large is generally okay with that. If you look at the single vendor example, which would be somebody like MongoDB or Elasticsearch and some others, those tend to be venture funded, or in some cases now public companies, which really are very uneasy about competition with the cloud vendors, because they are focused on making sure they are the entity which captures most of the value created by that open source project. And what that has been causing recently is many of them fully or partly abandoning open source licenses and adopting some sort of source-available license, which doesn't give you all the freedoms of the open source licenses. Now, what is the primary reason for the license change, if you ask me? Well, it is actually creating a monopoly on the database-as-a-service side of the market, right, because that is the market which is the most valued those days. Now you would ask why that database-as-a-service market matters. Well, it matters because database as a service gives you really state-of-the-art simplicity, where you as a developer using a database can deploy the database in an API call and a couple of clicks, and have a database which patches itself, backs itself up, recovers itself from failures and so on and so forth, allowing a high level of automation and allowing application developers to focus on the application instead of on all this complexity of operating a database at scale. Now you may wonder, well, what is the problem with a monopoly on database as a service by the cloud vendors? If you can just run your own database, isn't that good enough? And I would argue that is not really enough, because in practice that makes the situation not much different from proprietary software, because using a database as a service is a very different skill compared to rolling out your own database setup, and it is just not practical for many companies to migrate from that vendor lock-in to implementing their own solution.
It is the same as if, I don't know, somebody like Oracle would say: well, you know what, you don't need to buy our database, what's the problem, you can just go ahead and write your own. Well, developing a database from scratch and developing an application with that database are different skills, and the same applies to database as a service. Though I think not all is lost when it comes to open source and databases. For me, it all reminds me of the situation which we have seen in the early 2000s. I remember I was involved in the early open source development with, you know, tools like PHP, Perl and so on and so forth, and a lot of that was not super efficient. There was a fair amount of friction to do that, and you could look at something like Microsoft ASP.NET with a lot of well integrated tools and so on and so forth. Now we can compare that, in the 2020s, to the Amazon or other cloud stacks, where we are getting a lot of very polished and well integrated services, compared to open source which may not be as mature or as integrated yet. But what is interesting here: the same situation happened in open source with operating systems a couple of decades ago. I remember the times of Linux 2.0 or something like that, which could not handle files more than 2 gigabytes in size, or had issues with supporting many CPU cores, didn't have a transactional file system and so on and so forth. And many folks I knew who would be running Solaris at that time were just saying, well, Linux is a toy and it's not really useful for any real workloads. Well, guess what, we have seen the history and we know now the market situation of where Solaris is and where Linux is. And I think when we look at the cloud, we should be expecting open source to catch up again, and that is where the Cloud Native Computing Foundation is really doing a lot of great work. If you look at the Cloud Native Computing Foundation, there is actually much more choice in the software which is available compared to what AWS or any cloud vendor can give you. It is much more organic, it's often maybe not as mature or not as integrated, but you have a lot of choice and it is improving and maturing rapidly. So I would encourage all of you in this case to look at this and see how we can really have an open source way to run the cloud, to make it a reality. Now, when you think about the cloud and cloud computing, I think it deserves its originally intended role as a commodity infrastructure provider. The picture I am showing now is something which I picked up from the early AWS advertising materials, where they have been talking about the cloud as something akin to electricity. But remember what electricity is: electricity is a commodity. You can get electricity from any vendor, or you can run your own generator, and the electricity looks all the same. You are not forced to change your TV or your fridge just because you are changing your infrastructure provider. But that is exactly where cloud vendors push you, and that is especially important when it comes to databases, because once you put a lot of data in a database it is very hard to get it out and move it to something else. That is exactly why you see a lot of proprietary databases getting such good valuations, because, well, the people who invest in that understand what databases are: people adopting a commercial database are stuck with it for a very long time. Okay, so do we have an open source database as a service at this point? That would be wonderful, wouldn't it?
Because, especially as I described, database as a service is important, and that is what offers the maximum value to the developers. And I would say that, unfortunately, not yet; but we as a community are taking our steps to making that happen. And let's look at what those steps are and what the components are which empower that. The first component to consider is Kubernetes, of course. Kubernetes you can think about as an operating system for the data center, the same as Linux came to exist as an operating system for a single computer decades ago. And Kubernetes is pretty much ubiquitous. It is available in all the public clouds. It's available from many private infrastructure providers such as VMware or Red Hat. It's available on the edge. And that is a fantastic target for us to develop the applications which we want to run anywhere but which employ this new development pattern. Specifically for open source databases, Kubernetes has now developed the technology called operators, which is something which allows us to run complicated stateful applications on Kubernetes. To one extent, operators really solve some of the same problems that database as a service solves. They really allow a lot of day one and day two automation: things like provisioning as well as patching, backups, high availability and so on and so forth. But at the same time, the UX for operators is not the same as database as a service. You kind of need to know how Kubernetes operates and be familiar with Kubernetes concepts. You cannot just really deploy an operator in two clicks, right, or a simple API call, without understanding Kubernetes concepts. But at the same time, Kubernetes can serve as a fantastic building block for database as a service, if you want database as a service with similar functionality to what we're seeing in the cloud. And this is not just theoretical. If you look at a lot of modern database-as-a-service offerings built on the public clouds, they are built using Kubernetes as a building block; a lot of them are, and in many cases you would not know unless you read their materials, because they look pretty much the same as other databases as a service. You have an API or GUI, boom, you deploy the database and it works. But internally it's taken care of by operators. And for me, because we have tens of thousands of database clusters running on Kubernetes in production, that shows me that Kubernetes now is really ready to handle mission critical stateful applications if you implement that correctly. So what is going on with completely open source databases as a service? Well, that is actually something that we are working on at Percona, as a feature in our software, Percona Monitoring and Management, together with our work on the operators for open source databases. We don't expect to be the only vendor who does that, and frankly we don't want to be the only one, but we are hoping to push the boundaries in this direction. And right now what we have is still work in progress, it is sort of a preview, but I would recommend you check it out, give us your feedback, or maybe even submit some code, so we can really see what is possible with Kubernetes and open source databases. One question I often get in this case is the question of sustainability: whether we can really have an open source database as a service while having a sustainable open source business. And I believe that it is possible, and more than believing it is possible, that is something we have been doing at Percona for the last 15 years.
All software which we release to the customers at Percona is open source. We don't do open core or source-available stuff, and we have been able to do that and grow for the last 15 years. I remember folks telling us when we were a 10-person company, that's never going to scale; and then at a 50-person company, well yes, maybe you can do it going from 10 to 50, but you'll never scale that to more than that. Now we are about 300 people worldwide, and while we are obviously not as large as many proprietary software vendors, I think that proves that you can get fully open source database solutions to at least a certain scale. And I would encourage every one of you to invest your time and effort in building open source, as we have done. Thank you. That's all I have. Okay, I think we are live with the Q&A session. Thank you for presenting this talk to us. And now, I think, some questions from the talk. And the question was, for example: all the databases seem to run on Kubernetes. What about databases on the edge? Is there still a need for that, or what can we do about this? Oh, yes, of course. And I think these are topics which are very well connected. What we see with the edge is that it comes in different shapes now. There are larger and smaller installations. On small installations there are a lot of databases which can work pretty well in edge applications. For larger ones, you can actually have Kubernetes on the edge with databases running on Kubernetes, which works fantastic, especially because Kubernetes allows this kind of automatic serviceability, which is important for many edge applications where staff access may be limited. Cool. Thanks a lot for sharing this. And I think we are almost at the end of the session. Thank you.
It has been an exciting year in the open-source database industry, with more choice, more cloud, and key changes in the industry. We will dive into the key developments over 2021, including the most important open-source database software releases in general, the significance of cloud-native solutions in a multi-vendor multi-cloud world, the new criticality of security challenges, and the evolution of the open-source software industry.
10.5446/57020 (DOI)
Hello everybody, and thank you for being here for this talk, in which I will talk about new features in the Ada 2022 revision of the language that lead to more natural looking initialization of data. So normally I would ask you some questions if this were a live presentation, so you are going to have to play along in your head, but basically: is this a valid Ada initialization? And okay, there's a trick, but we can agree that this is a valid initialization as long as we are using strings of the same length, because in Ada array elements have to have a definite size, and also the same size for every element. So this is something that beginners struggle with, which is that you cannot have this, because your strings are not of the same length, and the compiler will not accept this kind of array declaration in which the string is unconstrained. So this leads to the typical workaround, which is abusing the "+" operator for this kind of conversion, in which we use another type which is definite as the array element, and the conversion is done, as I said, using this operator. This is normally seen in Ada, so it's already part of the culture of the language, but it's ugly, at least in my opinion; I think nobody likes to do this on purpose. So let's continue with our suppositions, and again, if I ask you if this is valid Ada: here we have a new string type that is inside the package I'm using for this experiment, with this name, Yeison, as a joke on the Spanish transliteration of the JSON acronym. And okay, you can say, if I define my string like this, or as an array of characters, okay, this is valid. But what if I ask you if this can be done for a regular private type? And here, depending on your knowledge of Ada 2022, you will tell me no, this is not valid Ada, because it wasn't, but it's going to be valid. And for the record, my experiments are done with GNAT 11.2.3. And with this we can achieve this kind of initialization for private types, so in turn this means that we can have an array of this type that can be initialized, which is private, so it's definite, although we don't yet know how to do this, and I will show you shortly. But what if I told you that this can be achieved not only for arrays but for any kind of type, also a private type? And to achieve this we need a few new features of the language, which come in the form of aspects. The first one is that for any type you can specify this String_Literal aspect, which in turn binds a function to the initialization, in the sense that you will get a call to the specified function with a Wide_Wide_String, that is, any Unicode string, and there you can convert it to your type's internal information. And what about arrays? There is also this new Aggregate aspect, which has several sub-aspects, in which for example you can define an append procedure (the Add_Unnamed sub-aspect) which will add elements to your container. So with these two new aspects, indeed, you can have the kind of initialization that we were seeing: the String_Literal takes care of the strings inside the array, and the second aspect, Aggregate, takes care of initializing an array-like data structure. Is there something similar for maps, in which we have keys and values? Well, there it is: instead of the Add_Unnamed aspect we have the Add_Named aspect, and here the procedure has to have this prototype. And well, with this we are all set to do that.
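These aspect declarations are only described verbally in the recording, so purely as an illustration, with invented names rather than the actual Yeison API, a package along these lines shows the shape they take in source; the package body with the actual conversions is omitted:

```ada
--  Hedged sketch: hypothetical names, not the real library's API.
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
with Ada.Containers.Vectors;
with Ada.Containers.Indefinite_Ordered_Maps;

package Literal_Sketch is

   type Str is private
     with String_Literal => To_Str;
   --  S : constant Str := "hello";          --  becomes To_Str ("hello")

   function To_Str (Text : Wide_Wide_String) return Str;

   type Vec is private
     with Aggregate => (Empty       => Empty_Vec,
                        Add_Unnamed => Append);
   --  V : constant Vec := ["one", "two"];   --  Empty_Vec, then Append calls

   function Empty_Vec return Vec;
   procedure Append (This : in out Vec; Element : Str);

   type Map is private
     with Aggregate => (Empty     => Empty_Map,
                        Add_Named => Insert);
   --  M : constant Map := ["k1" => "v1", "k2" => "v2"];

   function Empty_Map return Map;
   procedure Insert (This : in out Map; Key : String; Value : Str);

private

   type Str is record
      Text : Unbounded_String;
   end record;

   package Str_Vectors is new Ada.Containers.Vectors (Positive, Str);
   package Str_Maps is new
     Ada.Containers.Indefinite_Ordered_Maps (String, Str);

   type Vec is record
      Data : Str_Vectors.Vector;
   end record;

   type Map is record
      Data : Str_Maps.Map;
   end record;

end Literal_Sketch;
```

The commented declarations are, under those assumed names, what the "natural looking" initializations on the slides boil down to once a compiler implements these Ada 2022 aspects.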
So this leads us to the question which is behind this presentation. There are several text formats to represent information. For example JSON, the JavaScript Object Notation, which I must say I think is not intended for human consumption (it doesn't allow comments, for example), and it's a structured data description, well, structured to some extent, because we can have for example heterogeneous arrays. Another equivalent representation, this time intended for humans, is YAML, that is YAML Ain't Markup Language, so we have there an in-joke recursive acronym, and we can represent essentially the same information, that is, different kinds of numbers, booleans, nested objects and so on. Then we have TOML, which is Tom's Obvious Minimal Language, which I must tell you is not that obvious once you get deep nesting in the data structure. And as you can see, the differences are minimal; there are some quotes around there that in some cases are necessary and in some cases are not. Anyway, the thing is that this got me thinking: if I wanted to represent this information in Ada in a natural looking way, could I do that with code that looks like this, for example? So as you can see, here we have maps, we have vectors, we have mixed types, oh no, and this is the question I'm going to try to answer in this presentation: can we achieve this kind of code in new Ada? And well, just like we saw for initializing strings, the first thing that we can clear up is that we can do the same for numbers, and I think part of the reason behind this is that the new big numbers library, which is a standard package in Ada 2022, needs something like this so variables can be initialized without using explicit conversions. And just like we have the conversion from a Unicode string, here a String is enough to represent all digits, so that's the difference in the prototype of this function, which is also good because it breaks an ambiguity here. So my first idea, when I learned about these aspects and started to play with them, was: can I do this? Can I have different initializations for the same type with different literals? And surprisingly, yes, this is accepted by the compiler. So now we can write this; is it a good idea? I'm not going to enter into that, but basically I can initialize with any of those three expressions and it flies by the compiler. So that's a step in the direction we want to go, and the obvious next idea is: okay, if I can use different literal initializations for the same type, what about different aggregates for the same type? So we would have something like this, and here the bad news begin, because this is rejected by the compiler with a quite precise message pointing me to the reference manual, which is something that GNAT doesn't always do; so it's like saying, okay, now you are very wrong. And the reason is here in the reference manual, so, shame on you. And actually, if you go to the manual, indeed it says that you cannot mix that kind of aspects. Why? Well, it seems logical not to mix vector-like and map-like initialization, but in this case I don't know why they limit me on this; I will discuss this a bit later. So in conclusion, this means that I cannot have just one type for the kind of initialization I'm trying to do; at least I have to split the vector and map initialization into two types. So with this information I did a first attempt, in which we have a basic type, or the core type, in which we have all the literal initializations and also, for example, the map initialization; and then we have an auxiliary type for the vector initializations, and a conversion function that returns the core type from the auxiliary type.
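In sketch form, again with hypothetical names, that first attempt has roughly this shape: the core type carries the literal aspects (including the "several literals on one type" experiment just mentioned) plus the map-like aggregate, while vector-like aggregates need the auxiliary type:

```ada
--  Hedged sketch of the "core type plus auxiliary vector type" idea.
package First_Attempt is

   type Any is private
     with String_Literal  => To_Text,
          Integer_Literal => To_Int,
          Real_Literal    => To_Real,
          Aggregate       => (Empty     => Empty_Map,
                              Add_Named => Insert);
   --  A : Any := "text";   B : Any := 42;   C : Any := 3.14;
   --  M : Any := ["name" => "Ada", "year" => 2022];

   function To_Text (Text : Wide_Wide_String) return Any;
   function To_Int  (Text : String)           return Any;
   function To_Real (Text : String)           return Any;
   function Empty_Map return Any;
   procedure Insert (This : in out Any; Key : String; Value : Any);

   --  Vector-like aggregates live on a separate type, since the
   --  Add_Unnamed and Add_Named sub-aspects cannot be mixed on one type.
   type Vec_Aux is private
     with Aggregate => (Empty => Empty_Vec, Add_Unnamed => Append);

   function Empty_Vec return Vec_Aux;
   procedure Append (This : in out Vec_Aux; Element : Any);

   --  Conversion from the auxiliary type back to the core type,
   --  so a nested vector reads:  D : Any := ["tags" => Vec (("fun", "typed"))];
   function Vec (This : Vec_Aux) return Any;

private
   --  Storage deliberately elided; only the interface matters here.
   type Any     is null record;
   type Vec_Aux is null record;
end First_Attempt;
```

The exact bracketing of the aggregates depends on the compiler version, as discussed in the talk; the point of the sketch is the split between the two types and the conversion function.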
With this we are really near our objective, because we can have this kind of code in which we have heterogeneous vectors, and the only sacrifice is that we have to make explicit that this is a vector, and also that those two parentheses are not optional, because in Ada there is a rule that you can omit double parentheses where there is no ambiguity, but here that would mean that instead of one parameter for the function there are three, so this is ambiguous and you cannot remove them. And in the end I didn't like this solution, because it has an asymmetry between vectors and maps which is kind of ugly. Of course you could force map initialization with another auxiliary type; still, you would have the problem of the two parentheses anyway. So I decided to continue looking for another solution. I had another idea, which is: instead of having a single type, can we have a single class, I mean a single family of types which belong to the same class? So we have the base class, we derive from it for maps with their own initialization, vectors with their own initialization, as we know, and since we are doing this for vectors and maps, we can have our own types for all the rest of the data types around, so we have a bit more type checking, which is also nicer than what we had just before. And well, apparently this is working well, because we can have initializations for each kind of scalar type (if we can call strings scalar in this context), a vector with mixed integers and strings... but what if, instead of using variables, I use literals? Well, it turns out that for some reason the user-defined literal isn't kicking in here to convert the literals. So okay, this doesn't work; I'm not sure if it should work, but at present it doesn't. The thing is that you can give a hint to the compiler and it will properly convert those literals. Well, when things like that happen with new features, I just try to move things around a bit to work around the compiler, and so I said, well, let's try to move the literal initializations to the base type, which is the first one that the compiler maybe will attempt to convert to, and, surprisingly or not, this works. And now we can have again this kind of mixed vectors, maps, maps that contain vectors and so on. The next attempt was: okay, I can use a vector variable, but what if I use a vector literal inside another vector or another map? And sadly, again we hit an error, which is that the aggregate cannot be class-wide, and this is not anything I have seen before, for example in functions that return a class-wide type. But looking a bit more in detail at what's happening here, it is a bit counterintuitive, because we know that the only type that that expression on top can be is a vector, so it doesn't matter that the placeholder is class-wide, because we are using an expression that can only be a vector. But I think the compiler is right here, because in Ada adding a new "with" cannot change the compilation, so although right now that's the only possible initialization for that vector, maybe in the future we would have other derived types in the same family class, and that would be really ambiguous. So right now I'm convinced that what I was trying is actually not proper; I would like to discuss this with a language lawyer. But anyway, again we see that with a hint this will work, and as I said, I'd like to discuss this, because maybe in this case it should work without ambiguity; but anyway, things as they are now, you cannot do that.
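The hierarchy described here would look something along these lines (hypothetical names again, payloads stripped): the literal aspects sit on the root type, each derived type gets its own Aggregate aspect, and the class-wide Element parameters are what make a nested literal aggregate ambiguous without a qualification hint:

```ada
--  Hedged sketch of the "single family of types" idea.
with Ada.Containers.Indefinite_Vectors;
with Ada.Containers.Indefinite_Ordered_Maps;

package Family_Sketch is

   --  Root type: all literal initializations live here.
   type Any is tagged private
     with String_Literal  => To_Text,
          Integer_Literal => To_Int;

   function To_Text (Text : Wide_Wide_String) return Any;
   function To_Int  (Text : String)           return Any;

   --  Derived types: each carries its own aggregate aspect.
   type Vec is new Any with private
     with Aggregate => (Empty => Empty_Vec, Add_Unnamed => Append);

   function Empty_Vec return Vec;
   procedure Append (This : in out Vec; Element : Any'Class);

   type Map is new Any with private
     with Aggregate => (Empty => Empty_Map, Add_Named => Insert);

   function Empty_Map return Map;
   procedure Insert (This : in out Map; Key : String; Value : Any'Class);

   --  V : Vec := [1, "two", 3];              --  mixed literals resolve
   --  M : Map := ["v" => Vec'[1, "two"]];    --  nested aggregate: needs the hint

private

   type Any is tagged null record;  --  payload omitted in this sketch

   package Any_Vectors is new
     Ada.Containers.Indefinite_Vectors (Positive, Any'Class);
   package Any_Maps is new
     Ada.Containers.Indefinite_Ordered_Maps (String, Any'Class);

   type Vec is new Any with record
      Data : Any_Vectors.Vector;
   end record;

   type Map is new Any with record
      Data : Any_Maps.Map;
   end record;

end Family_Sketch;
```

In the commented Map declaration, the Vec'[...] qualification is the "hint" the talk refers to: it removes the class-wide ambiguity by naming the aggregate's type explicitly.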
So where does that leave us? Well, I have not been able to solve this conundrum, so right now the best I have is this kind of initialization, where you have to make explicit nested initializations for aggregates if they are vectors or maps, although for us it's clear that there cannot be ambiguity; at least we can get rid, in this case, of the double parentheses that we had before. So this would be the current result. How is this achieved? Here we have a summary of all the aspects that come together to achieve this. There is an exception for booleans: there is no support for this kind of initialization, but you can get around it because False and True are not reserved words, so you can reuse them; that's okay. And well, you see how everything works together. For that problem of the aggregate that cannot be class-wide, I can see two solutions at the moment. One is to discuss whether that restriction is really needed, because, well, it's in your hand to make your maps proper maps or your vectors proper vectors, but maybe in this kind of situation you want both initializations, and to ask whether there's some other motive for that decision. Another possibility would be that the aspect could be overloaded. This is already done, for example, for indexing: we can index a container with a numeric index, with a key for example in a map, and, as we will see, indeed with any other type, for example a cursor. Here I will leverage this to use paths, as I will show in a moment. The important thing right now is that this Constant_Indexing aspect, given a function name, actually refers to three different functions, while for the initialization, for the append, if you try to do the same, the compiler will silently ignore all but the last matching function. And since in Ada ignoring things silently is not normal, I am hopeful that this might be an omission that will be working in the future, or at least that we would get an error that one of those is not being used, that you have conflicting functions for the aspect, because as I say, right now only one function is used for the kind of aggregate initializations we are attempting. Now that I have brought up the topic of indexing: what if I define this nested vector inside a map, and I ask you what's at the position key "one", second index? It seems clear what the answer should be, and by using the Constant_Indexing aspect, indeed, this works out of the box. It returns anything of the class, which can be indexed again, and internally, of course, we are checking types, so this is safe at runtime. But I also thought, okay, if I have a sequence of indexes, I can represent those indexes in a vector, and this vector will work for this indexing; I mean, this opens the possibility of building indexes dynamically. But what happens if I try to do that directly in an expression with literals? Then this fails. There's a catch-22 here; I'm not going to go into too much detail, but if you have a prototype which accepts a string, that string could be a vector of one element, and if you remove that prototype with a string, it won't recognize it as a vector with one element. So you cannot have both, even if you remove one of the conflicting functions. So again, you can give a hint to the compiler, or the alternative solution is to again leverage an operator to build an unambiguous expression. I have seen this in other libraries, and I have done this in my own libraries, for example when dealing with file system paths, where the forward slash is used to denote a path inside a structure. So this, I think, is a nice solution; I don't feel that this is as abusive as the plus operator.
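The indexing and path-building machinery sketched above could be declared along these lines (invented names; the stored representation is elided), overloading the functions behind Constant_Indexing and the "/" operator:

```ada
--  Hedged sketch of Constant_Indexing overloads plus "/" path building.
with Ada.Containers.Indefinite_Vectors;

package Indexing_Sketch is

   type Path is private;

   type Node is tagged private
     with Constant_Indexing => Element;

   --  All three overloads hang off the same aspect name:
   function Element (This : Node; Index : Positive) return Node;
   function Element (This : Node; Key   : String)   return Node;
   function Element (This : Node; Keys  : Path)     return Node;

   --  "/" turns keys and positions into a Path:
   function "/" (Left, Right : String)          return Path;
   function "/" (Left : Path; Right : String)   return Path;
   function "/" (Left : Path; Right : Positive) return Path;

   --  Chained indexing:   Doc ("servers") (2) ("name")
   --  Path indexing:      Doc ("config" / "servers" / 2)

private

   package Key_Vectors is new
     Ada.Containers.Indefinite_Vectors (Positive, String);

   type Node is tagged null record;  --  payload elided in this sketch

   type Path is record
      --  Positional indices are kept as their 'Image here, purely to
      --  keep the sketch short; a real library would store a variant.
      Keys : Key_Vectors.Vector;
   end record;

end Indexing_Sketch;
```

The "/" overloads are what make an unambiguous path expression possible without the qualification hint, which is the design choice the talk settles on.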
At this point you may be wondering why I spent so much energy on this kind of trivial thing, and well, I admit I like to push new features to the limit, so why not. And the other reason is that it comes from Alire: in the beginning, Alire used Ada code to represent releases, so in essence I'd like to have a data structure that can be both compiled and checked by the compiler, or parsed at runtime. And I think this could be useful for tools that are to be used by Ada programmers. This is not yet in the library; of course, parsing a subset of Ada might be tricky; we'll see if this arrives anywhere. In conclusion, I think that user-defined literals strike a good balance between runaway implicit conversions and using the ugly operator. All the new aspects come together nicely. There is this problem with the aggregate initialization ambiguity; we'll see if one of the solutions that I discussed makes it into the compiler. That's all. I hope you found the talk interesting, and I'm happy to discuss anything with you. We have one question, from Jeremy Gosse: is there support for JSON schema? Well, no. Right now the library is just the bare bones of the experiments I have just presented, so there is no loading of JSON or any of the other formats, although it's an obvious feature that I'd like to have in the future: being able to export the same data I am initializing this way to JSON, YAML or TOML or whatever, or, well, in general that kind of text representation. I had another question personally that I thought about while listening to the talk: these aggregates, can they be used during elaboration of the program, or do they have to wait until after elaboration? No, I don't think there is any restriction; it's just a normal initialization, and in fact, in the code in my library, most of the initializations that I used to test these new features are performed during the elaboration of the main executable. All right, we have another question now, ah, right: what other Ada 2022 features are you looking forward to? Let me think a bit. Well, obviously I was looking forward to the parallelism features: the parallel loops, and this feature, I don't know what's the official name, where you have like two or three blocks of code and you execute them in parallel, like do this and this and this, and the compiler will create for you three short-lived tasks that don't interact. It's a nice way to split work onto several cores of the CPU. I think that the parallelism things are the ones that I was most looking forward to, although I have heard that they are not going to be in GNAT, at least in the short term. And I also like the reduction of containers, when you have a collection of elements and you can apply a function to reduce the container to a result. Those are the features that, off the top of my mind, I'm looking forward to. That's not as interesting, but it goes back to the question of elaboration: I'm not sure if tasking will be available during elaboration; maybe that's a question for later. We have another one, another question, from Fernando: are these techniques, the ones shown during the presentation, currently in use for the ada-toml package? Well, that package, I'm not the author and I don't know; I don't think so. I guess that right now, precisely for that package, the author would want to have maximum compatibility, not to restrict himself to Ada 2022. Okay, maybe the author of ada-toml... I have the name in my head, but since I'm trying to answer without delaying, I can't recall it; it is difficult to answer questions. No, no, let me... Then we have a very interesting remark from
Boyan Petrovic which is not a question I feel that as more aspects like these accumulate in the language standard the more background knowledge is needed for a programmer to understand what is actually happening behind nice syntax at some point my head's going to have no free space yeah I have to comment but before the author of ADA TOML is Pierre Marie the the Rola sorry about about the mental skip and about this yes well the thing I don't like too much about aspects is that when you use a type which has aspects as a formal for a generic you are whipping out the aspects because the aspects don't appear in the generic parameters and so I feel that in some way aspects and generics are not 100% well integrated but this is probably a minor it doesn't come often as a any kind of problem but it's true that we have more and more aspects every time and yes it's really I think that's what you said at the end of your presentation you have to strike a very fine balance between easy for the reader and easy for the writer basically yeah that's true and in some cases the sorry go ahead when you when you write a very complex data structure with nested components or whatnot using all of these aspects it looks like adjacent data structure so it's very legible for the human but then a programmer human looking at this might scrap their head asking themselves what's happening and how all of these get initialized and maybe we'll want to attach a debugger in order to understand how these are executed because it's really not abuse yeah the thing is that for some aspects I think they are very simple to understand for example those aspects to to make initialization for a literal that's pretty straightforward but for example the aspects for iterators are really hard to I mean every time I try to use them I have to go to the basic samples and relearn them every time for clients is not a problem because they just use the aspects and that's okay sorry not the aspects the the data types transparently but for writers is really not so obvious sometimes right again it's a matter of balance historically Ada has struck the balance towards the reader yeah and the thing is that for example with this uh integer literal you can write a universal integer and it will recognize it and use it for the initialization but you cannot use any other variable of an integer type so there's that's the limit there 10 seconds um if anyone wants to join us in the in the back room they can now
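To make the aggregate side of this talk concrete, here is a minimal sketch using only the standard containers rather than the speaker's own library; it assumes a compiler with Ada 2022 support (with GNAT, the -gnat2022 switch), and every name in it is illustrative.

   with Ada.Containers.Vectors;
   with Ada.Containers.Indefinite_Ordered_Maps;
   with Ada.Text_IO;

   procedure Aggregates_Demo is
      --  Ordinary instantiations of the standard containers.  In Ada 2022
      --  they carry the Aggregate aspect, so the bracketed syntax below
      --  works for them just as it does for plain arrays.
      package Int_Vectors is new Ada.Containers.Vectors
        (Index_Type => Positive, Element_Type => Integer);
      package Grade_Maps is new Ada.Containers.Indefinite_Ordered_Maps
        (Key_Type => String, Element_Type => Integer);

      Scores : constant Int_Vectors.Vector := [10, 20, 30];              --  positional
      Grades : constant Grade_Maps.Map     := ["math" => 5, "art" => 3]; --  named

      --  A user-defined type opts in to the same bracket syntax with the
      --  Aggregate aspect, roughly like this (shape only):
      --
      --     type Int_List is private
      --       with Aggregate => (Empty       => Empty,
      --                          Add_Unnamed => Append);
      --
      --  Reduction expressions, mentioned in the Q&A, are another Ada 2022
      --  addition: on a plain array A of Integer, A'Reduce ("+", 0) sums
      --  the elements.
   begin
      Ada.Text_IO.Put_Line (Integer'Image (Scores.Last_Element));  --  " 30"
      Ada.Text_IO.Put_Line (Integer'Image (Grades ("math")));      --  " 5"
   end Aggregates_Demo;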
Ada 2022 is around the corner with many goodies in the form of new features and featurettes. Arguably small syntactic-sugar additions, like user-defined literals and container aggregates, combine for the programmer's comfort and allow natural initialization of user-defined containers with the same expressions that have been used for basic arrays since the beginnings of Ada. In this talk, I discuss how these features allow the initialization of a container data type for heterogeneous values (a la JSON) without the need for crutch functions (like the usual "+"). Such a structure could be used for compiled-in definitions but, more ambitiously, a relatively simple parser for a strict subset of Ada could leverage this data structure for natural-looking (to the Ada programmer) configuration files. Such configuration files could be useful for tools that are strictly Ada-oriented, written by and for Ada programmers.
10.5446/57024 (DOI)
Well, this marks the end of this year's Ada DevRoom. In these five minutes that we have for the closing session, I would just like to thank everybody once again for the work, and especially the speakers and the collaborators that have made this possible. It goes without saying that, obviously, thanks to all the work from the FOSDEM team, this has all been made possible. Right now, after this small video stops playing, we will have a very, very short Q&A, so questions and answers, but it will mostly be just me probably talking. And the reason we created this event, this very small event, is so that the system, the FOSDEM Matrix system, now creates a room for this event (well, not really a private one) where anybody can join. That was done so that if anybody now wants to come and have a beer and talk about really anything (it doesn't have to be Ada), you can come to that room and just have a talk with me or everybody else who wants to join. That way we can all have a nice small place where we can freely talk about whatever we want. Hopefully you, the public, all enjoyed this day of presentations, and if you have any feedback, please send it to me. In this morning's introduction to the Ada DevRoom you can find my contact information, and if you would like to just ask questions, or if you have any issues or feedback, do not hesitate to send them to me or to the FOSDEM team. Thank you very much and I will see you shortly. Ciao! ...presentations, managing everything. This room, this presentation where we are, will be opened after this Q&A, and I invite everybody who has seen this to join. We can all have a discussion, chat, and bring your beer. You have the opportunity to just mix with the public, mix with the speakers if people are still here. I would like to invite you all. Do you have any other closing words, Dirk or Ludovic? Just a remark: this is FOSDEM, let's make it FOSDEM. I would like to publicly thank you for all the work you did in pulling this together. I know you spent quite some effort with this platform; it's a huge adaptation to do everything. So thank you. Thank you a lot for taking this initiative and pushing everyone to make it happen. Well, five seconds. So if you would like to stay a little bit longer, don't forget to join us in this room. Ciao!
After this short closing event, a room will be open for anybody to join and just talk about anything.
10.5446/57025 (DOI)
Welcome everybody. This is the introduction to EDA for all those who want to understand what EDA is about in this world where we have many programming languages. The name of EDA comes from EDA Byron who was considered the first programmer in history working on Babbage machine. The first standard came up in 1983. Now that it's about the same time as C++, long after C, and just a few years before Python. So for those of you who think that EDA is an old language, it's no way older than many of current languages. It was the first industrial language to offer exceptions, generics, tasking, and many higher level features. A major improvement was made in 1995 with the introduction of object-oriented programming, protected objects for lower level inter-task communication, hierarchical libraries to better organize projects. Not that EDA was the first standardized object-oriented language about three years before C++. There was another standard in 2005 adding interfaces inspired from Java and many other improvements. In 2012, EDA went more formal, more oriented towards formal proofs with contracts, higher level expressions. And the history is still going on. We are about to issue a 2022 standard that will especially improve in the area of concurrent computations, parallel blocks, and parallel loops. EDA is a free language, which is especially important in the context of this first term. What does it mean? It's an international standard. It doesn't belong to any maker or to any company. It is freely available, and the working group that controls the language are user working groups. This is an extract from the IEAO standard, and it says, this document may be copied in whole or in part, in any form or by any means. The EDA standard is a free standard. And considering how expensive standards are usually, that's something very important. You can find the standard in HTML or any format you want, and you can read the standard. You have free compilers. You also have proprietary compilers. And many free resources, components, API, tutorials, and many channels to get help from. Here are some good sites, and also, of course, the usual community chats that you can find, like, for any other language. Some example of people who use EDA, the French TGV, the high-speed train, and Airbus, and the famous A380, very successful plane, if not commercially. This is the French line 14 of the subway, which was very successful to the point that it was adopted to renew New York's subway. And this is the Rosetta probe, which worked absolutely perfectly after five years in space, and also a night thing like the Kingcat luxury boat. And I don't have a picture here, but, of course, Ariane 5, and it was so successful that EDA was kept for Ariane 6 Rocket. So why would you use EDA? Well, we'll see that EDA has many features to make safe computing. So all those systems are systems where failure is not an option. But your own software would not fail easily. So having a language that will protect you from many flows like buffer overflows, arithmetic problem, illegal pointer, and so on, is a great help also for everyday programming jobs. The basic idea is that EDA checks a lot at compile time. And even design flows can, to a certain extent, be checked at compile time. So it's true that when you write something in EDA, the first shot in general does not compile. So the first reaction of people is to say, well, that damn compiler will not accept what I've written. 
And then after a while, you realize that there was a design error in your program, and that saved you a lot of debugging time. So many languages tell you, look at all those nice features, all the things that are allowed by my language. But in EDA, we tend to say, oh, look at all the things that are forbidden by the language. Because what's important in the language is all the checks it can perform, that is, all the things that it can forbid. So here is a picture illustrating the various parts of EDA. The basis, the concrete base on which the whole language is based, it is a classical procedural language that is with procedure, functions, variables, and so on. The syntax is based on Pascal because it was very important to have readable language, and Pascal appeared as the best basis for readability. On top of that, you have a very strong typing system, but really very strong compared to other languages. We'll talk a lot about that. Then you have a number of features, each serving a particular goal of software engineering. Packages are here for modularity, exceptions for runtime safety, generics for you, tasking for concurrency, and also a number of low-level programming features because EDA is used a lot in embedded systems and sometimes you have to handle peripherals, hardware, and so on. So it's important to have a clean access to the low-level features. All these are used to serve programming methodology. Sometimes you hear people saying, well, what's important is to have a good programming methodology, therefore we don't mind what coding language we use. In EDA, we take the position that the methodological aspect should not be abandoned when you go to coding phase. The language is here to express the methodology, and that's why you will find so many design errors with EDA because the concept of the methodology mapped directly to EDA concept. Therefore, if there is a flow in your design, it will result in a flow in a program that will be coded by the compiler. Of course, we are now proudly supporting object orientation ever since 1995. Then you have the small dog houses on the side. These are kind of extensions. These are optional extra packages used for certain special needs, but not for anybody needs. I'll give you a word about that later. All in all, when you first look at it, EDA looks like a very classical language with all the usual features. But when you use it, you will discover that it's more different than it seems in the way you use the language and in the services provided by the language. EDA has a building block approach. To understand what this means, let's compare it to PlayMobile against Legal. You know with PlayMobile, you have nice little characters, each designed for a very special purpose. So they perfectly fit the purpose, but there is nothing else you can do. If you have the PlayMobile circus, you can play PlayMobile circus and nothing else. Even the small characters are not always compatible from a box to another one. The legal approach is to provide pieces that are of no use by themselves. I mean, if you have just basic blocks like this, there is not much you can do with that. But if you assemble those pieces together, then you can make very simple things like that or very sophisticated things like this or even this. So the global syntax is derived from Pascal, as I said, with improvement to make it more readable. For example, here we have a for loop. It clearly says that C will scan over all possible values in type color. And it's protected. 
You are not allowed to change C inside the loop and C will disappear after the loop. So there is no way to cheat with loop control. You are guaranteed that C will scan over the indicated values and nothing else. You have also an example of a for loop that's quite classical. At the bottom, you can see that you can give a name to a loop. And if you give a name to the loop, it must be repeated at the end of the loop. And that's very convenient to match the beginning and the end of the loop. So this is an example to show you the effort of readability that has been put into the language. The if statement is very classical with several parts possible with else if. A special mention for the case statement that the equivalent of the switch case of C and C++. You can give ranges for the values. But all the values knowing the type of the controlling element here, I, you know, the compiler knows all the possible values. And the compiler checks that all possible values are given in the various branches here of the case statement or that there is a one others covering all of the cases. That's a very useful feature because if you change the type of I and if you forget to handle some new cases that you have added, then the compiler will tell you. You can also express not only basic values, but even structure values with with what is called an aggregate. So for example, if you have a two dimensional matrix like here, you can express directly the value of a matrix in your code. Or here I have a small example of a link list. So you see that the variable head. You create a new node. So you allocate memory whose value is initialized to 10,000. And the next element is another new node with some value and next is no. The idea is to be able to describe directly structured data without always getting down to the individual elements. So what do we call a strong typing system? Here I define the type age, the age of a person as ranging from 0 to 125 and flows in a building ranging from minus five to 15. I can declare and use variables of those types, assign values, but I cannot mix them. Even if the values are compatible, since there represent different abstractions, you are not allowed to mix them. This might seem very obvious. You've been told for a long, long time that you cannot mix Apple and oranges, but Ada is the only language that provides that kind of control for your own types, especially for elementary types. The idea is that whenever you define an abstraction, some kind of type resulting from your problem domain, like an age, a floor, these are elements from the problem domain. Those types must be mapped into the machine, and the machine only knows about bytes, ints, longs, floats, things like that. All other languages provide you access only to the machine level. The int in C or in other languages is just the machine level types. Therefore, you have to do the mapping, the implementation choice, to implement your high level types onto the machine. The new thing in Ada is that you describe directly at the level, at problem level, the types that you need, and that the compiler will do the mapping accessing the most appropriate machine type. And that will provide you immediately independence to the implementation. You don't care about the machine because the compiler is doing that mapping. Packages are a fundamental feature of Ada. They allow you to group into well-defined modules, data types, or associate operation to data. 
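To make the loop, case statement and strong-typing points concrete, here is a small self-contained sketch with illustrative names; the commented-out assignment at the end is the kind of mixing the compiler rejects.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Typing_Demo is
      type Color is (Red, Green, Blue);

      --  Problem-domain numeric types: the compiler picks the machine
      --  representation, and the two types cannot be mixed by accident.
      type Age   is range 0 .. 125;
      type Floor is range -5 .. 15;

      A : Age   := 30;
      F : Floor := 2;
   begin
      --  The loop parameter C scans every value of Color, cannot be
      --  modified inside the loop, and does not exist after it.
      for C in Color loop
         Put_Line (Color'Image (C));
      end loop;

      --  A named loop: the name must be repeated at "end loop".
      Search : for I in 1 .. 10 loop
         exit Search when I = 3;
      end loop Search;

      --  A case statement must cover every value of its type, possibly
      --  with ranges or a final "others" branch.
      case F is
         when -5 .. -1 => Put_Line ("basement");
         when 0        => Put_Line ("ground floor");
         when others   => Put_Line ("upstairs");
      end case;

      A := A + 1;      --  fine: both operands are of type Age
      --  A := A + F;  --  rejected by the compiler: Age and Floor do not mix
   end Typing_Demo;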
For example, this package, the Column Manager, provides you with a type known color, and that type is marked private. It means, okay, there is a type, that type is called color, but it's independent from implementation. You don't tell the user of the package how that type is implemented. Therefore, it's really abstract. You cannot have also visible type, like density, which is a fixed point type, by the way. So it's a type with regular values ranging from zero to one with a step of one over 256. You can define colors here. These are constant. So you know that you have a red, a green, a blue color, but you still don't know how it's implemented. It's independent. And you can provide operation, like adding mixing to colors, or changing the intensity of a color by multiplying it by a certain coefficient. Of course, the user does not have access to the concrete implementation, but the compiler needs to know how it's implemented. Therefore, if you define a private type, you will have a private part that will tell to the compiler the truth of the type. So here we see that the color is a made of three values of type density, and here are the values for the red, green, and blue constants. But that is not usable for the external user. It's private part to the compiler. And the operations will be implemented in what is called the body of the package. The user of the package has no access to the body. It's purely implementation. And to tell the truth, you can even use that package even before the body is written. Therefore, you have a complete independence between the specification, the abstract specification, and the implementation of your abstract data type. Here is a user of that package. If you want to paint something, when you use the color manager, you can define a variable of type color as half blue plus half red. If you want to change the color, you can use a multiplication, but not a division simply because there is no division defined on type color. Of course, you could have defined one, but here I didn't. So if you try to use an inappropriate operation, it will be rejected by the compiler. So abstractions are really enforced because you have that clean separation between what the user sees and how it's implemented. Note that when you use a package, you must name it in what is called a with closed. Every dependence between modules is clearly stated in those with closed. So it's easy to see the graph of dependency. And therefore, we don't need no dirty make file in Ada because the compiler knows what modules depend on what other modules. And when you recompile, the compiler is able to recompile only things that are affected by a change and nothing else. Another interesting feature of the language is discriminated type. This is something that has almost no equivalent in other languages. A type can have parameters called discriminants. You see you have discriminants that work a bit like parameters to a data type. Here I have a student recall and you have three majors, letters, sciences and technology. And of course, you can grade from zero to 20. Well, sorry, that the French grading system with numbers. Everybody has a name defined as a string and the length of the name is given by the parameter. Everybody has a grade in English or in math. And depending on your major, if you have a major in letters, then you have a grade in Latin. If you have a major in sciences, then you will have grades in physics and chemistry or in technology, you have a grade in drawing. 
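As a sketch, the student record being described comes out roughly like this; the grading scale and component names are an approximation of the slide, not a copy of it.

   with Ada.Text_IO;

   procedure Students_Demo is
      type Major is (Letters, Sciences, Technology);
      type Grade is range 0 .. 20;

      --  A discriminated record: the length of the name and the variant
      --  part both depend on the discriminants.
      type Student (Name_Length : Natural; Subject : Major) is record
         Name    : String (1 .. Name_Length);
         English : Grade;
         Math    : Grade;
         case Subject is
            when Letters =>
               Latin : Grade;
            when Sciences =>
               Physics   : Grade;
               Chemistry : Grade;
            when Technology =>
               Drawing : Grade;
         end case;
      end record;

      Someone : constant Student :=
        (Name_Length => 5, Subject => Letters,
         Name => "Alice", English => 12, Math => 14, Latin => 16);
   begin
      --  Someone.Latin is fine here; Someone.Physics would raise
      --  Constraint_Error, because this student has the Letters variant.
      Ada.Text_IO.Put_Line (Someone.Name & " got" & Grade'Image (Someone.Latin));
   end Students_Demo;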
So you have flexible data structures and which components or the size of components can depend on those parameters and the discriminants. The object oriented model of Ada is quite original. That's a typical use of the building blocks approach I described earlier. Packages support encapsulation. There is a special kind of type called tag type that support dynamic binding. A class in the usual sense is encapsulation plus dynamic binding. Therefore, it's built by assembling those pieces. There is no class reason of the word in Ada. A class is a special design pattern where you put a tag type into a package. So for example, here is a classical widget. I call the type instance because that allows me to write it as a widget.instance, which is quite readable, with an operation on the instance called paint. Then if I want to define a menu that inherits from a widget, a menu is a widget, I derive a new type, from widget.instance. So I said that a menu.instance is a new kind of widget.instance with some extra elements and I can keep those extra elements private. You can also have them visible, but in that example they are private. And I can redefine an operation for menu. Actually, this is a method. Note that I didn't talk about pointers. In Ada, object-oriented programming is absolutely not related to pointers. You can do perfect, good, object-oriented programming with pointers. Of course, we have pointers in Ada for length list, data structure, and so on, but it's not required for object-oriented programming, which also is a big improvement since having no pointers is much safer in general. In every type, tag type, define an associated type, managed by the compiler, called the class-wide type. So here I have a hierarchy of widgets. So I have windows, and I have a menu, and when you have pop-up menus, scroll down menus, and so on, classical inheritance tree. All those types are called specific types. These are the usual types. Associated to each specific type, there is what is called a class-wide type. So it's automatically generated by the compiler with the name of the basic type with tick class behind it. So widget tick class is defined as containing all the values of widget and all the types that are derived from widget. A type in Ada is a set of values. So the set of values of widget tick class is the set of values of everything, which is a widget which is derived of the whole inheritance tree. In a sense, in Ada, we make the difference between the widget itself, that the specific type, and the widget family, which is widget tick class. That's quite original and has lots of implications. For example, I can define a procedure to move an item, but the same algorithm is applicable to all the kinds of widgets. Therefore, I can have a parameter here of type widget tick class, which means that that procedure can be called with any kind of widget. And I can call arrays on that item, change its position, paint it again, and so on. If I define two variables, one which is a pop-up that instance, another which is a window.instance, since the values all belong to widget tick class, I can call move on either of these. And note that each of these specific types has a different implementation of arrays and paint. Therefore, at runtime, the compiler will call the appropriate operation considering the real, the specific type of the parameter. So if it's a pop-up, you will call the arrays for pop-up. If it's a window, you will call the arrays for window. That's dynamic linking. 
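Here is a self-contained reconstruction of the widget example in the spirit of the talk; Instance, Menu, Paint and Move are illustrative names, not a real GUI library.

   package Widgets is
      type Instance is tagged record
         X, Y : Integer := 0;
      end record;
      procedure Paint (W : Instance);

      --  A menu "is a" widget: it inherits X, Y and Paint, and overrides Paint.
      type Menu is new Instance with record
         Entries : Natural := 0;
      end record;
      overriding procedure Paint (M : Menu);

      --  One algorithm for the whole family: Instance'Class accepts a plain
      --  widget, a menu, or anything else derived from Instance.
      procedure Move (W : in out Instance'Class; DX, DY : Integer);
   end Widgets;

   with Ada.Text_IO;
   package body Widgets is
      procedure Paint (W : Instance) is
      begin
         Ada.Text_IO.Put_Line
           ("widget at" & Integer'Image (W.X) & Integer'Image (W.Y));
      end Paint;

      procedure Paint (M : Menu) is
      begin
         Ada.Text_IO.Put_Line
           ("menu with" & Natural'Image (M.Entries) & " entries");
      end Paint;

      procedure Move (W : in out Instance'Class; DX, DY : Integer) is
      begin
         W.X := W.X + DX;
         W.Y := W.Y + DY;
         W.Paint;   --  dynamic binding: the right Paint is chosen at run time
      end Move;
   end Widgets;

   with Widgets;
   procedure Widgets_Demo is
      W : Widgets.Instance;
      M : Widgets.Menu;
   begin
      Widgets.Move (W, 1, 1);   --  prints the widget line
      Widgets.Move (M, 2, 2);   --  prints the menu line
   end Widgets_Demo;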
Note that in starting from it at 205, if you prefer the object.method notation, you can move the first parameter before the operation, roughly the same way as you would do in Python. Starting from 205, interfaces have been added that work a lot like Java interfaces. The principle is the same, a type and be derived from one full tag type and several interfaces. Interfaces are kind of abstract type, but there is a small improvement over Java. They can also have no methods, methods that are defined as doing nothing that serve as optional methods, for example. For example, if I define a persistence service and think that can be read or written, then if I want to implement that interface, I must provide a read and a write method. Then I can have a persistent window that claims that it is derived, it inherits from window.instance, of course, and that it implements the services of persistence. Ada has exceptions. Exceptions are, well, now most people know about exceptions. It was not so common when Ada was first released, but now many languages provide exceptions. A big difference is that exceptions were built into the language right from the start. So everything that can go wrong at runtime will raise an exception. In Ada, we don't say we throw an exception, we say that we raise an exception. The model is more like a trap if you want. So even errors that are codes by the runtime, like accessing outside of an array, buffer overflow, dereferencing an old pointer, or even device malfunctioning or memory violation that can happen almost in only imported code, all those things, everything that can happen bad at runtime will translate into an exception. And every exception can be handled. The mantra is that once you've taken care of the unexpected, you must take care of the unexpected and expected. The nice thing with exception is that you can have one others that allows you to handle even everything that is totally unexpected. For SIF systems, systems that have to work 24-7, it's very important to handle by program everything that can happen. If you have a probe or a satellite orbiting the Earth, you certainly cannot go to it and press control as there. The software must handle everything. Generics are a way to provide reuse. The trouble with strong typing is that if you design an algorithm for a certain type, you cannot reuse it for a different type. So generics are a way to provide algorithm that can be instantiated, used on any data type, provided that the data type features a certain number of required properties. So here is an example. If I want to swap two variables, well, apparently you know the algorithm and that can be used for every variable. But that's not true because in EDA, for example, we have what is called limited types and limited types do not have assignments. They are strictly controlled, so assignment is not available. So you can swap variables if the type provides assignment. So the syntax here tells that you can instantiate that swap generate on every type which is at most private. Private type have assignments, but since it doesn't say limited, you cannot instantiate these generic unlimited types. And the compiler will check that the feature you use here are always available for a type that matches the requirements expressed here. So that's a big difference, for example, between EDA generics and C++ templates. C++ templates are checked when they are instantiated. 
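The Swap generic being discussed looks roughly like this; a minimal sketch, where the Age type is only there to have something concrete to instantiate with.

   generic
      type Item is private;   --  the contract: any non-limited type will match
   procedure Swap (Left, Right : in out Item);

   procedure Swap (Left, Right : in out Item) is
      Temp : constant Item := Left;
   begin
      Left  := Right;
      Right := Temp;
   end Swap;

   --  An instantiation: the compiler writes a Swap specialised for Age.
   with Swap;
   procedure Swap_Demo is
      type Age is range 0 .. 125;
      procedure Swap_Age is new Swap (Item => Age);
      A : Age := 20;
      B : Age := 90;
   begin
      Swap_Age (A, B);   --  A is now 90, B is 20
   end Swap_Demo;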
Here there is a contract and if the type you provide matches the contract provided by the generic specification, then it will always work because the contract is checked for the implementation according to the required properties given in the specification. Such a generic is instantiated this way with, so here I am writing simply procedure swap edge is new swap with type edge, which means that the compiler will write for me according to that template a new procedure where every time the word item appeared here, it will be replaced by edge. So the instantiation, you can understand it as a kind of macro, but it's a semantic macro, it's not text substitution. Tasking is an integral part of the language. It's not an external library, there is syntax that define tasks. Task is the EDA term for threat. So, and this has implication in the language because it means that every implementation of EDA is required to take care of tasking and for example, all libraries are always re-entered and so on. Tasks are high level objects, they are declared like variables and you can make a rave of tasks, you can have tasks that are part of records or they are handled like usual objects of the language. I won't have time to go into many details, but you have high level communication and synchronization features. The rendezvous implements a client server model where a client calls services provided by a server task. It's a very high level concept and it's quite easy to understand. For some lower level communication needs, you have protected objects that work like passive monitors and that do not, for simple cases, it avoids requiring very small tasks everywhere. And honestly, tasking is very easy to use. It's very easy to add little tasks to display a clock in an area of your screen or thing like that. In many languages, tasking is hard to use and people are reluctant to use it because if you have to deal with these semi-fluoros or low level things like that, it's a pain. Here, it's very easy to put small tasks to make your task easier and to use them. And as I mentioned, because Ada is used into many embedded systems, you need to access the low level. And the principle is to let the compiler do the hardware. We have seen those two levels. The same thing when you need to access the low level. First, you define a high level view. For example, I have a recall that could be some data acquired from a device with a Boolean telling that the device is on, a count of samples taken by the device, and a status that's an array of Boolean, for example, the state of various switches on the front panel or something like that. Then you will describe the low level. You can have what is called a representation close that will specify the exact bit by bit representation of your data. Here, I'm telling that on is at word zero, there's a range of bits zero to zero. So it's bit zero of word zero. And count is bits one to seven, and status is bits eight to fifteen. So in C, you specify, bit fields just specify the size. Here, you leave no freedom to the compiler. You have the exact bitwise representation. So the principle is once you have described what you want at the lower level, you still work at high level. If you want to change a bit in your bit array, you write quite normally mi.status indexed by three colonic called false. And that will translate into the precise machine instruction that does exactly what you want. So it's always the same principle. This time, you don't let the compiler choose the representation. 
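A hedged sketch of the bit-level layout just described, plus a memory-mapped object of the kind the talk turns to next; the address used is a made-up placeholder, not a real device register.

   with System.Storage_Elements;

   package Device_Map is
      type Sample_Count is range 0 .. 127;
      type Switches is array (0 .. 7) of Boolean
        with Component_Size => 1;

      type Acquisition is record
         On     : Boolean;
         Count  : Sample_Count;
         Status : Switches;
      end record;

      --  Exact bit-for-bit layout: bit 0, bits 1..7 and bits 8..15 of word 0.
      for Acquisition use record
         On     at 0 range 0 .. 0;
         Count  at 0 range 1 .. 7;
         Status at 0 range 8 .. 15;
      end record;

      --  A memory-mapped object of that type; the address is a placeholder.
      Device : Acquisition
        with Volatile,
             Import,
             Address => System.Storage_Elements.To_Address (16#4000_0000#);
   end Device_Map;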
You impose a certain representation and the compiler will do the bridge between the high level and the low level. If you want to go really low level, there are many things you can do. For example, here, patching memory is very easy, neither. But you have to tell what you do. You define a model of your memory. Here, for example, a storage array. Well, I take the famous 64... 648 k bytes. And I say that this array is implemented at memory address zero. So it will match directly the memory. And then I can write the peak and poke procedure simply as high level constructs. But since I have imposed the actual address, this will match exactly my memory. You can also include machine code. You can handle interrupts. You can do lots of low level stuff with Ada. But you have to tell it clearly. You don't divert a function like a pointer to mean that you want to access memory. You have to do... to tell what you are doing and to do what you are telling. That's the basic principle. No, nothing is hidden. You have to be explicit. A word about those annexes. It's an extension of the standardization process. You understand that you may need many different features for various specific domains. And since Ada is a certified compiler with a certified validation suite, if everything was put into the standard, then you would need every compiler to implement everything, even if the users don't want it. So it's an extension, but that's an extension only in the form of packages, pragmas or attributes that are special values provided by the compiler. And so this provides extra features, but no language change. That's what I mean. That are intended for specific domain. So there is an annex for system programming, for example. There is one for real time, one for distributed systems, which is very interesting. I made a presentation about the distributed system annex in a previous FOSDEM, so you can retrieve it from the archive. Information systems, business systems and that kind of system, numerics with guarantees on the accuracy of computation. I also gave a presentation on that and safety and security annex for high safety system. So what else? Ada is really a portable language. Really. I mean, all those tools you find with all other languages like configure, automate, conditional compilation just compensates for the lack of portability. If you need conditional compilation, depending on your target, then it means that even if your program works on a number of different targets, nothing proved that it will work on another architecture. Here, because you work at the high level in the compiler bridges to the low level, you don't have to make variation depending on the target. Even the virtual machine concept is really a workaround. It's, you know, Java, I heard people claiming that Java is portable. Java is not portable. Java works on only one machine, the Java virtual machine, and you have to emulate that machine on other machines if you want to run it, because it's so unportable. Being portable means that the program will work on various machines without changing the code. And that's free what Ada has achieved. Well, of course, we also have compilers for virtual machines if you want. But here, because there is a validation suite, all compilers implement exactly the same language. They are checked and they must work the same on every machine. And those high level constructs re-protect you from the differences. So believe me, I did that. Many people experienced that. 
You can write a program that's 100% the same between Linux, Windows, embedded systems, usual mainframes and so on. To the point that many programs have been developed on mainframes because it was easier to develop there, then move to embedded targets without any major problem. Ada was designed mainly for ease of reading, because the program is written only once, but read much more often than it is written. But it doesn't mean it's bad for writing. So there are a number of things that will help you when working a program. First of all, especially Gnats, there are very good error messages. I keep telling people, please read the error message. With other languages, in general, the error messages are not very useful. I mean, you all have seen an error in C syntax error near line 23. What does it mean? What that compiler that cannot tell me if I miss a semicolon or a closing parenthesis? With all Ada compilers have very good error messages. This is an example, for example. Here I made a spelling error in the name of a variable, and the compiler tells me possible misspelling of lines, because it made a substitution, found something that was close enough that would have fit at that place. And it can even fix if you are used to equal instead of column equal for assignment, then it will tell you. And sometimes when you see the error messages, you wonder if the compiler is reading your mind. That's really, really advanced. The features of the language will protect you from many mistakes, and you have to experience that to really believe it. So you will have to say that. And especially strong typing. Strong typing, the way it's done in Ada, protects you from many, many confusions, like inverting parameters in subprogram calls, confusing a pointer with a pointed at data, although the error that can take hours of debugging to find just don't pass the first compilation. And as Ada proponents usually say, if it compiles, it works. Well, of course, it's a bit exaggerated. You still may have algorithmic errors. But you don't have those stupid coding errors that you encounter in many languages. And so once you get used to the language, because of course it needs training, you need to understand how it works. But once you pass the first step, you have that nice feeling that you spend your time on designing, thinking about your problem rather than just chasing stupid bugs because you confuse two different variables. You have a number of components and tools that are available. An interesting point is that the standard defines how to interface Ada with other languages. And you have even tools to automatically generate bindings to libraries written in other languages. And that result is that all the components you can use with other languages are available with Ada. Plus someone that are unique to Ada, some example AWS, that allows you to put a web server inside your Ada program, and then there is aziz, which is a great tool to make, to analyze this tool for Ada code. And you'll find a number of that on good sites that summarize everything that's available. So to conclude, you could expect me to tell you, oh please use Ada. No, I won't tell you that. You are grown-up people, you are free to use whatever language you want. But I will just tell you, please try it. Give it a chance and see what it really means to be able to think at a higher level, think at the level of your problem, and forget all about those nasty implementation details that are now handled by the compiler. Believe me, that's very nice. 
Thank you for your attention. Questions and answers, you have more questions. The room where this conference, this Q&A is taking place will be opened, and you can come here and ask and have a longer discussion if needed with GMPF. So the first question is regarding the type system, and Pavel asks whether Ada is the only language or probably the most widely used language that has distinguishable numeric types. Well, I don't know all the languages, and when you talk about languages, you always can find some other languages doing the same thing. Actually, as far as I know, it's the only language where you express the high-level notions. Of course, you can embed types in records and make your own types the way you want. But here, it's more the need of expressing directly the numeric types that belong to the problem domain that make it quite unique. And as far as I know, I don't know of languages that make it that simple. Of course, everything is possible with any language. Thank you. This question was already answered, but I would like to ask it nonetheless. Are there any tools to visualize the packages, links, and graphs on dependencies? Probably, that's what they meant by that. Yes, well, you have many tools. With Nat in GPS, you have tools that can visually display all the links between the dependencies and so on. Thanks to the WISC closes, it's especially easy to do in Ada, because all dependencies are explicit. Thank you. We have a question from Christoph. Do you have training learning sessions planned shortly? I can answer that, and then JMPI probably has an offer to you. We have another introduction to Ada, but that one is more, I would say, technical, less introductory and assumes some programming knowledge. It's going to be given this afternoon by Paul Jarrett. It focuses on if you come from another language, how do you translate concepts to Ada? And I think that will help you a lot more. We also have another introduction for numeric types that JMPI will give in half an hour or so. And there are learning resources, but no, for this conference, we do not have anything more than that planned. However, I think JMPI can offer you something else. Well, during that conference, well, I don't know the whole program, of course, you can watch it again, I think, you can replay. Otherwise, you will have me speak. I think, Anando, you know the program better than I do. Now, if you're talking about a formal commercial training, of course, it has a lot to provide with that. Okay. Thank you. One question from Pavel. First of all, I think I heard about range or even singular types in the past from other sources. Second, even in C++ or REST or even in C, you can always wrap the numeric types as a single element of a structure types. Sure. The second problem is not so convenient, but still, what would you comment on that? Would you say that Ada makes it more accessible or clear? Or what are the pitfalls that other languages have compared to Ada? Well, I partly answered that because it's partly the same as the previous question. But yes, you can do anything with any language. The question is how easy it is. Okay. It's plain arithmetic. You define types, you define operations, and you use them. And the fact that it's bundled into the language pushes people to use it. What's important is that if you can do things, but it's too complicated to do it, people will just not use the feature. Here, it's natural from a physicist's point of view to define units and to have operations between units. 
And you don't have to define classes and methods and a whole lot of complicated things just to make sure that you don't add a time and a length. So yes, you can do it, but it's in other languages. But if it's too difficult to inconvenient, people won't do it. Okay. Thank you. We have a question or maybe a comment on tasks by Aldi who says, I found that it is very hard to restart the tasks in Ada. I can easily start and stop a task, but not restart after the task has stopped. Is there a pattern that's recommended or a utility in the Ada language that you would recommend or indicate to use? I don't understand exactly what you mean. I mean, a task, when a task is terminated, it is terminated. It makes no sense to restart. You may restart an identical task, executing the same code, but it's a different object, you know. So yes, it's very easy to restart tasks. You just declare the task and it starts, or you can have pointers to a task and then you create a task with new. It just restarts. I mean, starting in task dynamically is very easy. Is it the same task or an identical task? Well, it depends on what you want to do. I don't know exactly what you had in mind there. Okay. Another question from Wilbert. In your experience in working with Ada, what are the main impediments companies run into, have to overcome, if they are switching to Ada from, let's say, C++ or something similar? Okay. I would say abstraction. Get away from the computer. In my training session, I always tell people, stop programming. If you use it correctly, you have great ways of modeling the problem and not thinking in terms of computers, not thinking in terms on how do I use my computer to do so and so, but really thinking in terms of the problem domain and expressing the problem domain. So it's really a different state of mind. Ada is more different than what you may think from this point of view, because you really have to, as I say, forget computing and just turn into modeling the problem domain. That the main difference. Once people have understood that, then they feel very easy because it's so nice to forget about the computers. Okay. Thank you. Another question from Aldi. Any free code checker and what code and standard to use for the Ada language? Any recommendations? Well, as coding standard, there is a document which is quite old now, but that for me the basis of most coding standard. It was, what was the name was? It was the Ada, boom, boom, boom. I just forgot the name. But it's the basis for most coding standards. Yes, you have two checkers and Ada has one called Ada control. Ada has one called NAT check and the number of companies provide coding checkers for Ada. That's easy. But, well, you'll find a lot of literature about that. I don't think I can get you out of the, give you every point out of the top of my hand, but a quick Google search will find you a number of standards. Okay. Do you have any recommendation for a code checker? Oh, mine, of course. Ada control is, well, I'm not in a good position because I have invested interest into that. But certainly Ada control is one of the most sophisticated ones. NAT check comes, well, used to come bundled with NAT. It's not provided anymore currently. We can hope it will return, but it's not currently. And the other are not free. Ada control is free software. It's important to let them know. And as far as I know, they can just download it from your website. Yes, yes. It's free software. Then we have only 25 seconds left. 
Once again, I would like to thank you, Jean-Pierre, for your tremendous presentation. And if anybody has more complex questions, or questions that did not get answered, the bot should now publish this room; you can join it and talk to Jean-Pierre directly and have a nice conversation with him. Okay. Thank you.
An overview of the main features of the Ada language, with special emphasis on those features that make it especially attractive for free software development. Ada is a feature-rich language, but what really makes Ada stand out is that the features are nicely integrated towards serving the goals of software engineering. If you prefer to spend your time on designing elegant solutions rather than on low-level debugging, if you think that software should not fail, if you like to build programs from readily available components that you can trust, you should really consider Ada.
10.5446/57026 (DOI)
Hello and welcome to the ADA-Deb Room. I'm Fernando Leo Blanco and I've been the main organizer for this event. This has also been organized in cooperation with ADA Europe and ADA VLG. Before we get started, I would like to thank all the people who have made this possible, mainly the collaborators, especially the Ukrainians, who was the main organizer for this event for over a decade, Jeffery R. Carter, Ludovic Prenta and Tamo Maglin. And also the speakers. They are the ones who have created these presentations and who have put a lot of work into them. And we cannot forget the FOSDEM team. They have been the ones enabling us to come together here today to share a nice day of presentations and questions and answers. And at the end of the day, hopefully also a bit of a beer. I'll tell you more on that later. So in cooperation with ADA Europe and ADA VLG, they are two organizations who promote ADA. It's used within the community, academia and industry. You can learn more about them if you go into their web pages. Their links are present in the presentation. So you can just click on them. And as said, they promote the usage of ADA. You can take part of them. They make the community come closer and they also organize different events and also academic and industrial papers that get shared in the case of ADA Europe quarterly. By the way, ADA Europe still has the industry track open for its conference in summer, I believe it's summer, in Belgium. So you may want to take a look at it and see if you want to present something there. If you would like to take part in their organizations, you can take a look at a PDF, a paper that's linked in the FOSDEM page for this presentation, for the video that you are watching right now, which includes the different costs and organizations that come with joining the organization and how you can do that and also what they offer to you as a member. So a short introduction about myself, since this is the first time you are probably seeing me. I'm Fernando Leu Blanco, a master's student mostly related to mechanical topics. And regarding ADA, I've mostly worked on updating ADA in NetBSD and I've also made, I believe, the first GCC ADA compiler running on PowerPC on NetBSD. I also showed ADA and Scheme Interop, you can click on the link if you want to see that. And I've run ADA on a synthesized RISC 5 core on an FPGA. So that is also public and the information is readily available. So this is the first time ADA is taking place in an online format and I would like to explain you, the public, how that is going to take place. So if you go to the ADA Debrune webpage at the top right, you have three links, video with Q&A, just video presentation and chat. The link I would recommend you click is the chat link that will open, as I will show you in a second, the element interface, which is the main interface for this entire event. The video and the video with Q&A links have a video link that you can open with any video player that can play videos from the internet. However, they are only the video, they provide no interactive medium with which to ask questions or discuss topics. For that, you should click on the chat link. Once you click, it will open the element interface and here you will see a few parts. On the left, you have the different rooms that you have joined and where you can discuss topics. On the center, which is the most important part, you will find the information that's taking place right now. 
At the top, you will have the stream where the videos for the presentations are going to be played and after the video has been played, the Q&A session will take place. At the right of the video, you have the questions and answers panel. The questions and answers, sorry, the questions can be asked by just simply typing on the chat system and the questions that get the most upvotes, the most reactions to them will appear on that panel and the more reactions they get, the higher up the list they will be. So the moderator that will help the speaker with the questions and answers can see which questions are more relevant to the community. In order to give the thumbs up to questions, you can react to a message by hovering over the message and at the right, a small set of boxes will appear where you can react to the message by clicking on the emoticon icon. We also have a manager. It's an automatic bot for the chat system, which will be announcing which talks will come and an event is going to happen. So it will announce when our questions and answers is about to happen, when a new talk is going to start. So that will help you, the public, know what is going on right now in the screen. We have a full schedule, so I'm very glad about that. We have talks that range from the most beginning and major topics, from introductions and first experience with ADA to the more advanced topics such as Spark, which is the very viable subset of ADA. We also have talks about tools and projects that have been developed with ADA. So finally, have as much fun as you can. Please ask questions, also ask a question now if you can, so I can see whether the system is working correctly. It doesn't have to be anything important, just send something and let me know whether this is working or not. Sadly, beers are not included today, but you can bring your own. And at the end of the day, we will have a closing session and after any talk, sorry, I should have explained this earlier. So after the questions and answers takes place, which will take place in a second for this presentation, after the questions and answers takes place, a room that was previously private will become open. That room is controlled by the speaker. And the speaker, if they have more time, they can join that room and anybody from the public can also join that room. There will be a chat system, just like the one that you should be seeing right now, and a conference system, just like the one you just saw for the questions and answers. You as the public can join the conference system so that you can have a direct conversation in real time, voice to voice, with image, with the presenter, with the speaker, if they have time. You can also write on the chat if you would not like to join the call. Those videos, the presentations, the talks that happen in those rooms that are specific to those specific talks and presentations are not being recorded. Only the Q&A will be recorded and it will only be recorded in this main room where all the presentations will take place. So this video stream will contain from this presentation, the introduction to the AIDA dev room, to the very last one. And you only have to join the specific rooms of each presentation if you would like to ask more questions to the speakers. Those rooms, once again, are private up until the questions and answers are done. The bot, once the questions and answers time is finished, will publish the link for anybody to join those rooms. 
If you missed the link for that room, you can go into the presentation, into the FOSDOM page for that presentation and click on the chat link that will be present there. So going back to the closing session. After the video is played and the Q&A for that small session takes place, the room will be opened and I am inviting you all to come together and just join that new room that will be created and will be open to the public and just bring a beer and half a chat. It can be AIDA related or not. You decide. I hope to be there for a while and have fun. Enjoy the presentations and ask as many questions as you can. If one is not answered during the official Q&A, if the speaker has time, you can always ask them in their own room. See you. So the Q&A using the prompt should have started now. Hopefully that was done correctly. Otherwise, I need to tell the speakers to correct that. But by the way, I see a question from Janik saying whether we can see the question. And yes, we can. You can see that it has two uploads. So that is wonderful. If anybody has questions on how the system works, on some of the dynamics that are involved, you can do that now. Another bit of trivia that a lot of people don't know, both speakers and you, the public, you can download also questions if you don't think they are that important or if they are duplicated. If you download a question, one point gets taken out of it. So that way you, the public, can more or less democratize what questions get to be at the top or not. I can see that, for example, now we have three uploads for the question from Janik. Also regarding the dynamics of the rooms, as I've said in the video, once the question and answer time runs out, the room where I'm in should be opened to the public. If you want to test whether you can join this room and how the room would look like once you join, you can come here and just say anything. And I can tell you whether it's working or not. You will be able to chat through text with the speakers and through this conference system that we are using right now. So I hope that everybody is also seeing the live stream correctly and no issues are taking place. I think we started, the system started the presentation a little bit late and hopefully the Q&A prompt that we get when it starts was more or less correct. Also, I need to see whether this Q&A stops when it should stop or if it gets cut before a hunt. And I think it's going to get cut before a hunt. If that happens, please join this room and tell me, yes, the Q&A session ended abruptly before the time where it should be. This Q&A should end exactly at 10.15.00. So we will see that. So anyway, now we will have the presentation from Jean Pierre and he will present, he will give his very well-known talk about the introduction of ADA to both beginners and experienced programmers. Really good. So I got started into this.
Welcome to the Ada Developer Room at FOSDEM 2022, which is organized in cooperation with Ada-Europe and Ada-Belgium. This year marks the first edition in which the devroom takes place in an online format. For that reason, this presentation explains how it works and how the public can use the systems provided by FOSDEM to interact with the speakers. We will also introduce the Ada-Europe and Ada-Belgium organisations. This small introduction also serves as a test to make sure the systems are working as expected and that any questions from the public can be answered.
10.5446/57027 (DOI)
you Hello, I'm Stefan Hild and I discovered Ada around three years ago and started to teach it myself using this tutorial from Fikibooks. I clearly recommend it. In my opinion, it's the best online tutorial out there or the best I had found and also use the Rosetta code and can you give good solutions for small tasks you maybe need and because you want to learn a programming language you have to grab something. I directly start to program and role-playing game because if you don't know much about a language and don't know much about game no-learning you start of course programming a game. Yeah, it's this was good. Exactly look like this and but it compiled. Yeah, don't talk your game. Okay, don't talk about this. Don't ask questions. I can't answer them probably and later in 2020 I started to program a new game and turn-based game at first that later became a civilization clone civilization like game first as a console with to make the background systems and recently I started to switch to the SML. It looks not so much different but I'm working with it and I hope I can sell it in one day hopefully. Yeah, and I also stream the whole development of this game on my YouTube channel. Oh boy, made much and also on Twitch and no one is forcing me to say that I'm streaming it. No one. No, don't worry. No one is asking, forcing me and now I want to talk about some advices and few things I discovered while programming a game without knowing too much about Ada and game with mostly about Ada. And we begin with some basic things. At first, don't make a spark game. I tried it at first, but I give up at some point because you change often things and then you have rewrite all the contracts that spark want. Also, you probably need for game features that spark don't have like random numbers or I know spark don't like random numbers and give my game big part of spark wouldn't be a feature I hope I can put in later in the future version, but at the moment it's for me it's more important to make a simple game in Ada than a game in spark. But what I would advise is to go in your project settings warnings and put them all on except the body information here. Probably there's a news I don't have for it. But the thing is, sometimes the option will give you an annoying message and you have write a minute work a minute on your code to fix this message. It's annoying, but it's still better than don't have them on later discover you have a problem and you have to change big part of your code because you have to fix this error and now you don't need a minute. You need a minute and several days. Yeah, so I give you advice put him all on except if you want really fast make a game but then maybe Ada is the wrong language anyway. Also the validity checks you can make them all on and never a problem instead and some style checks you probably the nesting thing I know you can nest or nest many things in Ada packages and all the stuff but just because you can don't meet your should and it will become unreadable if you're nesting to keep so it would give you a limit. Also if you use not V8 not W8 as a command you can use some UTF-A symbols like I did in the console version with the pointer directly your code is a really nice thing and if you want to add libraries that you have here I searched again. Yes, it's dependencies and you have to add your library here don't put it directly in your project. 
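One way to capture the project settings discussed above is directly in a GNAT project file; this is a hypothetical example, and the switches are simply the command-line equivalents of the options mentioned (all warnings, all validity checks, UTF-8 wide-character sources).

   --  my_game.gpr : a hypothetical project file
   project My_Game is
      for Source_Dirs use ("src");
      for Object_Dir  use "obj";
      for Main        use ("my_game.adb");

      --  Libraries are pulled in by "with"-ing their own project files at
      --  the top of this file (for example: with "some_library.gpr";)
      --  instead of copying their sources into your project.

      package Compiler is
         for Default_Switches ("Ada") use
           ("-gnatwa",    --  enable (almost) all warnings
            "-gnatVa",    --  enable all validity checks
            "-gnatW8");   --  source files use UTF-8 for wide characters
      end Compiler;
   end My_Game;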
I did it wrong at first, because I don't know what I'm doing most of the time. So these are some basic things that I think are really good advice when you start. Now we switch over to string handling, which can be really annoying in Ada if you do it wrong, if you don't know it. Yes, string handling in Ada. Ada has several types of strings, and if you use a version of Ada that is 2005 or newer, then you have the normal version of String, the Wide version and the Wide_Wide version, and most of them you can completely forget if you are programming a game: you can just use Unbounded_Wide_Wide_String, because then you don't have to worry about how long your string is. It's of course not completely unbounded, but it's long enough to store text from files or user input. You also don't have to worry when you change your text, if you translate it, rewrite it, whatever you do, which happens really often. And if you program a game, at least a game for a modern system, you don't have to worry about memory use. Of course the Wide_Wide version uses more memory than the Wide or the normal version, but that is just background noise in a game. The map of my game, where all the information about what is where is saved, takes most of the memory, and it doesn't contain any strings, so it's just not important. So you can just use the Wide_Wide version, and it holds the complete UTF-8 range, so you don't have to worry about special characters if you translate to another language like German with its special characters. In a few cases you will need Wide_Wide_Strings where you know exactly how long they are, but that is not so frequent, at least I don't use it so often, because you have to worry about the length, and that is really annoying when you're programming a game. You will also probably need the conversion from Unbounded_Wide_Wide_String to Wide_Wide_String, because many things, like SFML, want a plain string to be passed in, so you have to convert. That costs some computing power, but again, it's not important for a modern game. You should also use Wide_Wide_Character if you use single characters for some reason, because then you can directly use the complete UTF-8 range, which is really good. Sometimes you will need other strings, and this is the point where you get really annoyed. For example, Ada.Directories, which was introduced in Ada 2005 if I'm correctly informed, only has plain Strings, and then you have to convert your string to or from an Unbounded_Wide_Wide_String or something to work with your internal structures. It's really annoying to transform strings, and then it doesn't handle special characters, and you have to read a file; it's really annoying. So the essence is: only use Wide_Wide_String, and mostly the unbounded one, because whatever you think you can save in memory with smaller strings is not important in a game; it's just annoying to transform your strings so they can hold the correct characters. Don't make your life worse: just use Unbounded_Wide_Wide_String and convert it where you need it. It's the easiest way, I think.
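To make that advice concrete, here is a minimal sketch (not code from the talk; the unit name and text are invented, and it assumes the source is compiled with UTF-8 support such as -gnatW8) of keeping game text in Unbounded_Wide_Wide_String and converting only at the edges:

with Ada.Strings.Wide_Wide_Unbounded; use Ada.Strings.Wide_Wide_Unbounded;
with Ada.Wide_Wide_Text_IO;

procedure String_Demo is
   --  Hypothetical unit name holding arbitrary UTF-8 text.
   Unit_Name : Unbounded_Wide_Wide_String :=
     To_Unbounded_Wide_Wide_String ("Jäger");
begin
   --  Append more text without worrying about the length.
   Append (Unit_Name, " (Elite)");
   --  Convert to a plain Wide_Wide_String only where an API requires it.
   Ada.Wide_Wide_Text_IO.Put_Line (To_Wide_Wide_String (Unit_Name));
end String_Demo;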
Good, now that we have done some string handling, let us do some more string handling; you will have many strings to handle. This time it's the 'Image attribute, or rather its Wide_Wide_Image version. You probably need this if you have some numbers and want to print them out as text in your game. I'm not sure if you can read this slide; I could make it bigger, but you get the idea. You have to know several things about it. First, it's important to know that if the number is positive, or greater than or equal to zero, it will get an empty space in front, and if it's smaller you get your minus sign. So you have to trim the string, or it will break the formatting of the text in your game. The second thing is that if you use it with floats, you normally get the scientific notation, so instead of something like "1.00" movement points you get the exponent form, and that is probably not the way you want to show your movement points. You can convert it, but this brings us to some weirdness of the library. There is a Wide_Wide_Text_IO version and a Wide_Wide integer Text_IO version, but the float Wide_Wide version I used is a GNAT internal unit, so you either make your own package by instantiating it from Wide_Wide_Text_IO, or you can use Float_Text_IO, but that one is only for normal Strings, so you don't want to use it. Then you can use its Put to convert the number and say: I want it formatted like this. That is a little bit problematic when you are starting out, because the first Put you find prints something on the console, I think; I'm not sure, because there is no comment. And there is another Put, and that is also not the correct Put; no, it's this Put. With this Put you can convert your float into normal number text, and also the other way around. It's a little bit complicated when you're beginning, because someone says "just use Put for that", but Put prints to the console, and then you scroll down and find out: oh, there are more Puts, without any comment. The standard library loves to do this; you have 8000 procedures called Put in 300 files, and it's sometimes confusing. So if you read "hey, use Put for that" and it doesn't work, you probably used the wrong Put. It's a procedure of course, because a function just takes information in and gives one piece of information out; you don't use a function for that. Also: no comments. As far as I know, Ada allows up to 200 characters for the names of procedures and variables and all that stuff, and yet they named them all Put; they also named many things Get. It's sometimes really confusing, and my advice is: I know you can do this in Ada, name everything the same, even with the same parameter names, as long as the type is different Ada accepts it, but you should not do this in your own program. It gets confusing, especially if you haven't worked with that part of your code for a longer time and then you have many procedures with the same name and nearly the same parameter names. Don't do this; you will hate yourself, believe me. I, of course, have never done this, but I have heard of people. So it's a little bit confusing; I thought this was important to mention because I needed a while to figure out which Put does what. Don't do it, or at least if you do, comment it so you can understand it later.
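As a hedged illustration of these two pitfalls (this is not the speaker's slide code; the names and values are made up), this sketch trims the leading space of 'Wide_Wide_Image and uses a Float_IO instantiation's Put to avoid the exponent form:

with Ada.Wide_Wide_Text_IO;       use Ada.Wide_Wide_Text_IO;
with Ada.Strings;
with Ada.Strings.Wide_Wide_Fixed;

procedure Image_Demo is
   package F_IO is new Ada.Wide_Wide_Text_IO.Float_IO (Float);

   Gold   : constant Integer := 42;
   Moves  : constant Float   := 1.0;
   Buffer : Wide_Wide_String (1 .. 8);
begin
   --  Integer'Wide_Wide_Image yields " 42", with a space where the minus
   --  sign would go, so trim it before building interface text.
   Put_Line (Ada.Strings.Wide_Wide_Fixed.Trim
               (Integer'Wide_Wide_Image (Gold), Ada.Strings.Left));

   --  Float'Wide_Wide_Image yields something like " 1.00000E+00"; Put with
   --  Aft and Exp formats "1.00" instead, right-justified into Buffer.
   F_IO.Put (To => Buffer, Item => Moves, Aft => 2, Exp => 0);
   Put_Line (Buffer);
end Image_Demo;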
That was not completely string handling, but a little bit, because you handle many strings in games: input and output. Ada has many different things, at least four different packages, for reading and writing data. For a game, two are probably important: Text_IO and Stream_IO. Text_IO is of course for text and only for text, for example the text you want to display in your game, in different languages and all that. For anything else you should probably use Stream_IO. At first I tried to put my database in some text file, or a simple editable file, using other methods, and either I'm really too dumb to do it or it just is that complicated, but I had really big problems. At some point I was so annoyed that I changed to Stream_IO, to just write the data there and write an editor for my databases later. I still have to do this, because it's really annoying not to. So if you don't have plain text, use Stream_IO; you will want this for your unit database or similar things. And when you read your text, of course read it into an Unbounded_Wide_Wide_String, and don't work around it with "but I can save 20 kilobytes if I use this crazy method to read my data". It's still not important in a modern game, and you will not really save memory with it; many other things in your game will eat up memory, so this is just not important.
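A minimal sketch of what such Stream_IO use could look like (the record, values and file name are hypothetical, not the speaker's actual database format):

with Ada.Streams.Stream_IO; use Ada.Streams.Stream_IO;

procedure Save_Demo is
   type Unit_Stats is record
      Attack  : Integer;
      Defense : Integer;
   end record;

   File   : File_Type;
   Knight : constant Unit_Stats := (Attack => 7, Defense => 5);
   Loaded : Unit_Stats;
begin
   --  Write the record to a binary file using the 'Write stream attribute.
   Create (File, Out_File, "units.dat");
   Unit_Stats'Write (Stream (File), Knight);
   Close (File);

   --  Read it back; Loaded now holds the same values as Knight.
   Open (File, In_File, "units.dat");
   Unit_Stats'Read (Stream (File), Loaded);
   Close (File);
end Save_Demo;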
Yes, this was my talk about my experiences and some advice about Ada and game development. Originally I was planning to put more things into it, like why it's important to make games with Ada, and things about the tasking system and so on, but it took me longer to talk about this stuff than I thought, so maybe you can ask me in the Q&A, or maybe I'll make a video about it on my channel later. I also want to say again that I stream all my programming on YouTube (so many streams) and on Twitch, and I'm still not forced to say that I stream all this stuff. No, no one is forcing me, don't worry. So have fun, and bye. Well, we should be live now, so thank you for your presentation, Stefan, and now we get to the Q&A. But before we get to the questions from the public I have a personal one: how did you learn about Ada, how did you find it? Do you have any previous experience in other programming languages, and why did you decide to make a game with Ada? That's an interesting story. I'm not sure how I discovered Ada; I just found it some day, looked at it and liked it, and started working with it. I had tried to make a game with other languages before, like C or C++, or engines like Unity, but I really don't like the C syntax, and the compilers of those languages are not really helpful with their messages, which are long but meaningless for me, so I never found the right way to work with those languages. Then I came across Ada and just started trying it out like the other languages, and somehow it stuck. Okay, very interesting. And how long have you known Ada, or worked with it? Since the first time it's three years, but I probably started working with it more around two years ago, because of time problems and such. Okay. We have a comment from Stephane Carrez who says: yes, the strings are annoying in Ada compared to some other languages. Thank you for covering that topic, because a lot of people don't really know that much about the strings, or are unaware of the complexities of Ada, and especially for showing the example where a user can just name a city or something and can provide any symbol. Was that a major issue with the language that you had to fight through at the beginning? The beginning was a problem, because many tutorials just cover the normal String, the plain ASCII characters, and you have to figure a bit out for yourself, especially since many things have the same name in the standard library, like the 5000 Puts you have in there. But after a while you get it, and if you are consistent about using Wide_Wide_Strings, then it works most of the time. Okay. And I see you are using a library for the game; how did you discover it? Nowadays we have this package manager known as Alire that manages dependencies and indexes different projects on the internet. How did you find ASFML? Yes, I first searched for some libraries, like SDL or SFML, and then you can just go under bindings and there's a list of bindings, and one of them was ASFML. Because it was actively maintained I chose it; I know there were other bindings too, but I wasn't sure about their state, so I chose this one. Oh, okay, thank you. And you showed two versions, the terminal version and then the graphical version, so I suppose that's when you started using the SFML library, with that transition. However, for the terminal version, were you using just the standard strings and string manipulation tools that Ada provides to draw things on the terminal? Yes, the standard things and some CSI escape sequences, these extended terminal commands; that's why it doesn't work correctly on Windows, because they are disabled by default there, I think. But looking good graphically wasn't the main purpose at the beginning, because I just wanted to integrate the background systems first, and it's probably easier if I don't start directly with another big library like ASFML, and see whether this works. So we are nearing the end of the Q&A. I would encourage all the speakers to come to the room; the bot will publish the address. It may take a few seconds, it seems to have a bit of a delay, so be patient please, and you can come to this room and have a direct conversation with Stefan if you want. So thank you, Stefan, for your efforts, and see you.
In 2020 I started live streaming the development of a turn-based strategy game. At that time I had little idea about Ada, programming or game development (nothing has changed about that to this day). But by September 2020 it had taken the early form of a Civilization clone. After more than a year of development, it has become almost a real game with its own features. And now I'm going to talk a little bit about some experiences and weirdnesses with game development in Ada.
10.5446/57029 (DOI)
Hello, and welcome to the Outsider's Guide to Ada. I'm Paul, a software engineer who normally writes C++. I've been using Ada for some of my hobby projects for almost a year now. I've written in quite a few programming languages, and not many people are familiar with Ada, so this is an overview of the language for programmers from other languages, from the perspective of someone who doesn't write Ada professionally and hasn't been working with Ada very long. This is not an advocacy talk. I can't teach you Ada in 30 minutes, so instead I'm going to show you its capabilities and also how its parts work together to provide a different experience from other languages. I think it's important to show real code, so most of the code samples I'll be showing are real Ada code from the projects I've written over the last year or from other open source projects. I am not a lawyer, so seek competent and licensed legal counsel before making any business or legal decisions based off of information in this talk. Ada is still in use today. It was released in the early 80s, followed by several revisions, making it a contemporary of C++, not of Fortran and COBOL. Whereas the lineage of C++ can be traced roughly from ALGOL through BCPL and then C, Ada comes, with similar influences, from a parallel track from ALGOL through Pascal. The Pascal influence can be seen in the use of colon-equals for assignment, which is a statement, not an expression like in C++. It also uses single equals for equality, slash-equals for not-equal, and begin and end keywords instead of curly braces for blocks. In addition, due to the language not being built around object-oriented programming, there are some interesting consequences for program organization and structure. While several paid compilers exist, there is the Ada front end to GCC, which is called GNAT, which is free as in beer and as in freedom. The Free Software Foundation's version of GNAT provides a GPLv3 license with a runtime exception, so I've been writing and releasing my code under the Apache 2.0 license. For simplicity within this talk, I will be treating the GNAT ecosystem and Ada as interchangeable, as GNAT is the predominant open source variant of Ada. In addition to a compiler, GNAT also includes a pretty printer for code formatting, a documentation generator, and other tools. Ada contains a subset called SPARK, which provides the capability to formally verify subsets of your program, enabled via the SPARK_Mode aspect. There's also a relatively new package manager called Alire. This enables you to install the GNAT toolchain on multiple systems, and to create, publish, and download crates, which can be libraries or binaries. If you're familiar with Rust, this tool fulfills many of the roles of rustup and Cargo. There isn't a considerable open source presence for Ada. The TIOBE ranking has it at number 34. Languish, which looks at the number of GitHub issues, pull requests, and stars, has it at number 163. Ada doesn't show up on the RedMonk programming language rankings, and it does not appear in the Stack Overflow Developer Survey. The Black Duck Open Hub shows Rust as having 10 times the number of contributors of Ada and about the same amount of code. The popularity of programming languages metric shows Ada steadily gaining, though there's some question of whether this is generated by Ada the language or by Cardano the cryptocurrency, which calls its token ADA. Most Ada tutorials describe the language, but not a possible environment for working in it.
I developed in Ada on Windows, Mac, and Linux. On all of these platforms I used the same method: I installed the Free Software Foundation's GNAT front end to GCC using Alire, and then used GNAT Studio or Visual Studio Code. Both of these editors are good; my personal preference lately has been Visual Studio Code. There's an Ada language server, which some people use with Vim, but it also has an extension for Visual Studio Code. For Vim users, there's also a group of plugins you can get called the Ada bundle. I use Alire to start my editor, since it will launch it with the appropriate environment variables set. I use a JSON workspace file like the one shown, along with this Alire command, to start Visual Studio Code directly. So when I write Ada, my editor looks something like this. A major hurdle in learning the language is that the vocabulary can be quite unfamiliar. I'll be using the vernacular of programmers familiar with C-family languages, in particular C++. If you're having trouble finding something or understanding an Ada-related term afterward, this chart should provide a starting place for looking for what you need. The way Ada is structured is a core, which I call type-safe C, and then three categories of features you can bring in as you choose. The core language includes primitives like integer and floating point types, arrays, functions, and packages for namespacing. This also includes a standard way of defining compilation units. One set of features includes object-oriented programming, compile-time generics, and constructors and destructors. I call this "C++98 in Pascal" because it doesn't include some features we commonly use in more modern C++ versions, like move semantics and variadic templates. The second set is built-in concurrency features. You can specify tasks for concurrent computation, as well as protected objects to coordinate read and write access to resources used by multiple tasks. The third set relates to modeling features. This includes ranges and constraints you can apply to types, the SPARK language subset used for formal verification, and also built-in design by contract. When I write Ada, I typically use few or no object-oriented tools, the standard library container generics, and design by contract with pre- and postconditions. I've used the concurrency features for data processing in parallel and for adding interactive features. The language provides the ability to opt into features. These fall into four categories: structure, to delineate compilation units, provide namespaces for functions and types, and separate code to execute concurrently; types, to define semantics for values by constraining their usage and contents, and to provide the mechanism for function overloading; statements, to define computation within our structural elements; and finally controls, to provide the compiler additional direction when the other elements are insufficient, which attach to the other elements without obscuring their intent. The physical structure of how Ada programs are written is a more formalized version of that used in C or C++. Specifications in .ads files are sort of like header files, and bodies in .adb files are sort of like source files. There's no preprocessor or include directive, and all compilation units must begin with what's called the context clause. This describes all of the immediate dependencies that the compilation unit depends upon, and can only be provided at the beginning. This means the compiler can see whether it has what's required before trying to proceed.
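As a hedged sketch of that physical structure (the package name and contents are invented, not one of the speaker's projects), a specification and body pair might look like this, with the context clause on the unit that needs it:

--  greetings.ads: the specification, roughly playing the role of a header.
package Greetings is
   function Make_Greeting (Name : String) return String;
end Greetings;

--  greetings.adb: the body, roughly playing the role of a source file.
--  The context clause lists the direct dependencies up front.
with Ada.Characters.Handling;

package body Greetings is
   function Make_Greeting (Name : String) return String is
   begin
      return "Hello, " & Ada.Characters.Handling.To_Upper (Name) & "!";
   end Make_Greeting;
end Greetings;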
The program entry point does not need to be called main, and can be specified as part of the build when making executables. The predominant compilation unit is the package. A common pattern in Ada syntax is the use of a pair of a declarative element and a corresponding body element. Package names are significant, and the use of dots between elements indicates a package being a child of another. In this example, if a package P.R existed, it would indicate that R is a child of the parent package P. Ada separates declarations from executable code. Sections following the reserved words is or declare indicate declarations, and those following the word begin indicate executable code. Data hiding, also known as encapsulation, happens at the structural level, in packages and protected types, by starting a section with the reserved word private, or by placing elements in the body only. This is different from many other languages, such as Java, which describe encapsulation at the class level. All of the structural elements exhibit these similar characteristics. This even includes the declare block, which can be used within statements describing execution to create an additional scope for variables or to set up an exception block to catch exceptions. The language permits the declaration of nearly anything in one of these sets of declarations. When writing new features, it's often convenient to split out new packages, functions, tasks, and protected types during refactoring into the declaration section of the function you're working on, before moving them to their permanent homes. Structural elements fall upon two axes: whether they are active or passive, and whether they describe sequential behavior or concurrency features. Subprograms, which are either functions, which return a value, or procedures, which do not, describe reusable sequences of steps. Packages provide namespacing for other elements and data hiding. They act like classes and subclasses in C++ or Java with regards to the rules for encapsulation: you can hide details from the outside while allowing children of the unit access. Tasks provide concurrent execution of statements, and the body in which they're declared won't exit until they're complete, unless they're dynamically allocated. Ada provides built-in capabilities to segment available CPUs into what are called dispatching domains and also to pin tasks to specific CPUs. Protected types synchronize read and write access to their contained data and can use complex guards to do so. One-off tasks and protected objects can be created by just skipping the type reserved word in their declaration. I'm only going to briefly mention types here to facilitate further discussion. In general, types aren't that important to how Ada works, because they tend to work roughly the same all the time. Their purpose is to describe the value set or modified behaviors allowed on variables of a specific type. This takes many forms. The range for a percentage could be described as 0 to 100. For a reference count, a new type, not an alias, might be used to hide the underlying type. What this does is create a new version of an existing type. Since Ada does not allow implicit conversion, the type system prevents mixing these types without explicit casting. This allows the type checker to catch bad semantic usage, such as assigning an integer of microseconds to one of nanoseconds. Arrays provide groups of a given number of a type. The index type can be another type and have an arbitrary range. For example, you can create simple maps by using an enum as the index type, or prevent the need to remap indices into and out of an array by declaring the array index range as that value range, such as 50 to 100.
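A small hedged sketch of those type features (the names and values are invented for illustration):

procedure Type_Demo is
   type Percentage is range 0 .. 100;

   --  New, incompatible types: mixing them requires an explicit conversion.
   type Microseconds is new Integer;
   type Nanoseconds  is new Integer;

   --  A simple "map": an array indexed by an enumeration type.
   type Color is (Red, Green, Blue);
   type Intensity_Map is array (Color) of Percentage;

   Brightness : Intensity_Map := (Red => 10, Green => 50, Blue => 100);
   T_Us       : constant Microseconds := 5;
   T_Ns       : Nanoseconds;
begin
   --  T_Ns := T_Us;  would be rejected: the types are different.
   T_Ns := Nanoseconds (T_Us);   --  explicit conversion required
   Brightness (Green) := 75;     --  index directly with the enum value
end Type_Demo;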
Limited types prohibit a value from being copied or reassigned. In my project Septum, the Searches package exposes a heavyweight, uncopyable search type in an opaque way to clients. Packages can encapsulate the details of a type by marking it as private in the interface. Clients can store and copy the type around in a similar way to a class with private elements in another language. Tagged denotes types which might require dynamic dispatch, which is also called runtime polymorphism or virtual functions. You can also create abstract base classes by adding the abstract reserved word, or create interfaces for tagged types, limited types, or for tasks and protected objects, with the appropriate keywords along with the interface keyword. In addition to providing their behavior, limited and tagged types are passed by reference implicitly. Ada is a strongly typed language, but the important takeaway is that because types are not modules, they merely facilitate the creation of other behavior and are not the main focus of the language. Access types provide reference semantics to Ada. You can't do pointer arithmetic on an access type; it just keeps indirect access to a value elsewhere. Multiple access types can be created for a single type, and these are incompatible with each other. They use separate storage pools, and you can even specify different pools per access type and even use subpools for increased granularity. They allow you to dynamically allocate a type, and the access-all flavor acts like a generic pointer to any value of that type, even on the stack. Like in C++, you can also have variations of constant accesses and accesses to constants. Values which you take an access of must be marked as aliased. There are also accessibility checks, which prevent you from returning an access to a variable which might leave scope, but you can subvert this with the Unchecked_Access attribute. Since types are not modules, there needs to be a mechanism to describe behavior operating on types. A major way that types push design is through the use of function overloading, also known as ad hoc polymorphism, as a primary design mechanism. Because of this, the namespaces of visible packages and the parameters and return types of the available options determine which subprogram gets chosen to be called. How do you define a function then? In other languages, the syntax changes depending upon whether you're attaching that function to a type or not, the visibility of the function, if and how the input parameter, either implicit or explicit, can be modified, and whether that value is copied, moved, or passed by reference or pointer. Especially in C++, there are quite a few ways to pass a parameter depending upon your technical and design intent. In Ada, parameters can be provided as inputs, outputs, or both inputs and outputs. When the mode of a parameter is not indicated, it defaults to an input parameter. Parameters are constant by default unless they are marked as an output parameter. Anonymous access types can also be provided, as well as a null exclusion to indicate that an access must be valid, which is like a reference in C++.
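A brief hedged sketch of those parameter modes (not from the talk; the names are invented):

--  "in" is the default and read-only; "out" and "in out" allow writing.
procedure Accumulate
  (Total : in out Natural;    --  read and written by the procedure
   Step  : in     Natural;    --  read-only input (the default mode)
   Done  :    out Boolean)    --  written by the procedure
is
begin
   Total := Total + Step;
   Done  := Total >= 100;
end Accumulate;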
For object-oriented programming, instead of providing a base class reference or pointer, the 'Class attribute of the base class is used, which indicates what's called a class-wide type. Subprograms declared in the same package as the type, which have that type as the first parameter, are known as primitive operations. These get inherited when another type is created from this one. Primitive operations may perform dynamic binding, like a virtual function, and dispatch if the value passed is of a class-wide type. This means that if the compiler knows the type of an object, the call will be statically bound. Operator overloads wrap the operator in double quotes, but otherwise work similarly to other functions. Since functions and procedures get declared outside of types, there isn't much of a distinction in Ada between what other languages call member functions or methods, static functions, or free functions. Controlled types provide RAII, or resource acquisition is initialization. Types which inherit from Ada.Finalization.Controlled can override three procedures: Initialize, which gets called after default initialization; Adjust, which gets called after the value receives an assignment; and Finalize, which cleans up at the end of the scope, like a C++ destructor, and also immediately before the value receives an assignment. Limited types cannot be copied or assigned, so for those you inherit from Ada.Finalization.Limited_Controlled, and there is no Adjust in this case. Attributes take the place of additional operators and keywords to denote special properties and operations of types and values. Initially they're super confusing, but they're just the name of a type or value followed by an apostrophe, or what Ada calls "tick". For example, instead of sizeof and alignof, there are the attributes 'Size and 'Alignment. Instead of an operator to get Ada's equivalent of a pointer to a value, you use 'Access on the variable, which also works on subprograms, like a C function pointer. If you want the raw machine address, you can use 'Address. To parse a primitive value from a string, there's a function-like attribute, 'Value. Primitives also provide 'Min and 'Max attributes to return the minimum or maximum of a pair of values. For arrays, there are 'Length, 'Range to provide a range to iterate over, and 'First and 'Last to get the first and last indices. Some attributes, like Size and Alignment, can be controlled with aspects or with what's called an attribute definition clause. Aspects allow you to attach additional metadata, options, and descriptions to types and structural elements without disturbing the main syntax. In the example shown here, you can see a type which enforces an invariant. Local_Flags is a packed array of Booleans of 32 bits, which is a fancy way of saying to use a 32-bit integer as the underlying type. This was done to provide a nicer interface for setting binary flags and then passing the value off to termios. Insert here will be inlined, but also has a precondition, which gets checked prior to being called, and a postcondition, checked afterwards. You can see the attribute 'Old here being used, which is the associated value prior to the procedure call. tcgetattr is just a C function being imported, and it can be used just like a normal Ada function in the code.
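A hedged sketch of those contract and inlining aspects (a hypothetical package, not the terminal-settings code on the speaker's slide):

package Counters is
   type Counter is private;

   function Value (C : Counter) return Natural;

   --  Inlined, with a precondition checked before the call and a
   --  postcondition checked afterwards; 'Old refers to the value on entry.
   procedure Bump (C : in out Counter; Amount : Positive)
     with Inline,
          Pre  => Value (C) <= Natural'Last - Amount,
          Post => Value (C) = Value (C)'Old + Amount;
private
   type Counter is record
      Count : Natural := 0;
   end record;
end Counters;

package body Counters is
   function Value (C : Counter) return Natural is (C.Count);

   procedure Bump (C : in out Counter; Amount : Positive) is
   begin
      C.Count := C.Count + Amount;
   end Bump;
end Counters;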
Ada's generics are different from those in other languages, since you instantiate generics at the package and function level and not at the type level. This instantiation is also explicit, meaning you can't just use an ordered map type inline with angle brackets and have it work. This was a major conceptual block for me, as I don't recall ever working with a language which handles generic instantiation this way. Instead, the entire package gets instantiated and then used like any other normal package. Also demonstrated here is the usage of named parameters, which also works with subprograms. For generics which can be shared, this instantiation can be done once at the compilation unit level, so each package doesn't need to individually instantiate it. Now that we've gone through an overview of the language, let's look back at a high level at how these pieces fit together. Let's start with the structural side. You can see how subprograms and packages can have generic versions, but tasks and protected types cannot. You can, however, put tasks and protected objects inside of generic packages. You can see types split out into their various flavors. Implementations are technically not types in Ada, but I'm grouping them in that section since they're distinct from one another. Finally, you can see the controls we have over the other elements. There are quite a few pragmas which have corresponding aspects. There's also a set of pragmas, attributes, and aspects defined as part of the Ada standard, but implementation-defined ones are permitted as well. With the available time, I'm going to highlight a few interesting features of Ada. Strings are arrays of characters and are not null-terminated. Since they are arrays, they include the same bounds checks as other arrays. If needed, you can convert to and from null-terminated C-style strings using the built-in Interfaces.C.Strings package. The language allows returning variable-length arrays from functions, so you can build and return strings from functions. String, like other arrays, is of a static size. If you need a resizable string type, you can use Ada.Strings.Unbounded.Unbounded_String. Since strings have a known size, creating arrays of plain strings is not possible unless you pad all of the strings to the same length or dynamically allocate each string. This also applies to keys of maps, for which I use the variable-sized unbounded string type. The concurrency features are a complicated subject, so let's just look at an example of tasking in action. Assume we're loading many files from disk, possibly performing processing, and then storing the result in a cache. The file loader task defines what's called an entry, named Wake. Entries are similar to procedures; they can't return any value, and they perform what's called a rendezvous between tasks or protected objects. One task calls the entry, and then the other task or protected object must accept it. I don't use any here, but you can define guards, which are conditions that must be true for a task or protected object to accept an entry. In the inner loop, this task calls an entry on the queue of files to process. This blocks until a file can be dequeued, or times out and exits the loop, and hence the task, after one second. If there was an element, then it's loaded, processed, and cached. The file cache here is a protected object preventing multiple file loaders from performing writes at the same time. This loop of the task ends by incrementing another protected object monitoring progress, which is used to display a progress report for the user. Note that the file queue, file cache, and progress variables are not declared in this task. Normal rules regarding scope apply, and these variables had been declared in the same function scope as this task, so they are available. Note that all of this is done with built-in features of Ada.
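The real example uses several cooperating tasks; as a much smaller hedged sketch (invented names, not the speaker's file-loading code), here is a protected object and a task with an entry:

with Ada.Text_IO;

procedure Tasking_Demo is

   protected Progress is
      procedure Increment;             --  exclusive write access
      function Count return Natural;   --  shared read access
   private
      Value : Natural := 0;
   end Progress;

   protected body Progress is
      procedure Increment is
      begin
         Value := Value + 1;
      end Increment;

      function Count return Natural is
      begin
         return Value;
      end Count;
   end Progress;

   task Worker is
      entry Wake;                       --  rendezvous point
   end Worker;

   task body Worker is
   begin
      accept Wake;                      --  block until the entry is called
      for I in 1 .. 10 loop
         Progress.Increment;            --  safe concurrent update
      end loop;
   end Worker;

begin
   Worker.Wake;                         --  rendezvous with the task
   --  Both continue concurrently from here; the procedure itself will not
   --  return until Worker terminates, so Count may still be changing below.
   Ada.Text_IO.Put_Line ("Progress so far:" & Natural'Image (Progress.Count));
end Tasking_Demo;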
This is a sampling of low-level controls. You can import C functions and then call them as if they were normal Ada functions. Compiler intrinsics work the same way, as in this example from the Atomic project. Inline assembly is allowed: I use this assembly to cause the debugger to break in the appropriate failing assertion when a test would fail in my test suite. When I did the popular "Ray Tracing in One Weekend" in Ada, I adjusted the binary layout to make a record appropriate for writing a BMP file header. Let's look back at the original layering of the language that I mentioned, and group these features. From here you can see there is a good base to the language for starting out, and you can incrementally add features as you need them. When I built the Septum code search tool, I started on the far left with just the basic primitives, then I branched out by bringing in design by contract from the modeling group. When I wanted to expand the filtering mechanism, I converted filters to tagged types to be able to use runtime polymorphism and dynamic dispatch, and they're still the only class-like types in the system. Most other types are struct-like records, with many being private types, which hide their contents within their containing packages. When I needed to parallelize search, I brought in tasks and converted the file cache to a protected type. My best way to describe Ada is that it's about intent. These examples show how some of the keywords relate to their goals. You describe what you want to have happen, and then you layer on meaning as needed with the additional controls available. The separation of structural elements and types from low-level controls puts the focus on solving the problem while providing customization where you need it. And we are live for questions and answers from Paul Jarrett. Before we start with the first question, let me remind people that they can always join the live chat after the talk ends. For that, they have to join the appropriate room in Matrix. So, the first question we had: why does Ada need move semantics, according to you, Paul? Move semantics are very important because they allow you to transfer resources without a copy or a reference. On the surface it looks like: why do we need that? But the important thing, to answer what Ada needs: there's a thing in C++ called perfect forwarding, where a variadic template is used with something called an argument pack to transfer arguments through a function call, to call another function as if it had the exact same parameters. And move semantics would allow that, assuming that we had variadic templates and things like that in Ada. It also expresses the semantic intent of an expiring object: I'm not going to use these resources anymore, I'm handing an object off as a parameter. I think it's something that Ada actually definitely needs. But you know that a limited type in Ada is returned not by copy, but by address. In-out parameters are also passed by address. Tagged types are also passed by address all the time. So move semantics already exist in Ada, simply implemented through limited types and return blocks. Yes, but it's a difference. It would be like adding a "moved" reserved word in addition to "in" and "out".
It's semantically different, is the point. It just formalizes it: when you move an object, you cannot access it anymore; the borrow checker will get upset at you and say that the value has been moved out, because it's a semantic definition of essentially destroying state. You use it for vectors, for transferring resources instead of copying them, and there is a Move function in some of the Ada containers, but this is a semantic way of building it into the language. Right. Well, maybe that's something to look into. I hear that SPARK has imported the borrow checker, or at least something similar to the borrow checker, into SPARK, so now at compile time there are some checks to verify the ownership of objects passed by address. So it's not in Ada, indeed; it's an extension to Ada, it's SPARK. But yes, maybe that might be something to look into. Second question: even though I read your website, what were the things you found really difficult, coming from the C++ world? What were the roadblocks you found? The biggest ones were attributes. I read Barnes's book, and it was the primary source that I used, and a lot of the tutorials just assume that you know what attributes are. I would expect there to be a keyword or something for that; there's not really a good explanation other than that it's something built in and attached to a value or a type, and that's not really explained very well in a lot of tutorials. That was a big trip-up. And then generics. Normally instantiations are implicit: in C++ and Rust you just use the type. The whole concept of having to instantiate an entire package, yes, it's what C++ does behind the scenes, but most of the time you don't concern yourself with it as much. That was a huge one. But everything else, in many respects, Ada builds on the same conceptual foundations. I call it "C++98 in Pascal" because I honestly think someone with C++ could get productive in Ada in less than a month, because there's a lot of conceptual similarity, even though it's a Pascal-based language and not a C-based language. A lot of the stuff, like the compilation model with the .ads and .adb files, is very familiar; a lot of it just flows directly from a C++ understanding. Does that make sense? Yeah. I didn't really catch what you said about the one month. You mean it takes at least a month to become fluent in Ada? I think I was productive in Ada in less than a month. I was reading standard library code in less than a month. In a lot of languages you would not normally be at that point that fast. That matches my experience too, because I was once an outsider too; I taught myself Ada, and yes, less than a month is sufficient. The important thing is that some people say "we can't use Ada because there are no programmers" or "we have trouble finding programmers", and I think something that people don't understand is that you could hire C++ programmers and convert them into Ada programmers relatively quickly. Yeah. Awesome. If you were bound to Ada for some reason. That matches our experience too. Have you managed to convert anyone to Ada from C or C++? Conversion isn't really my goal. My goal was primarily to figure out the language, so to speak. Part of the reason I learned it was that there wasn't a lot of information, and I don't know anyone in person who actually uses Ada at all.
This conference is my first experience actually talking to people who use the language outside of the text forums and such. My intent was to understand the language, and all the material I've written has tried to be from an objective point of view, laying out the language, because a lot of people don't understand how it fits together. I think there are a lot of conceptual ideas in it that would improve other languages if they adopted them. Especially from a university point of view: Ada has a lot of concepts that are used in more difficult languages like C++, and I think, especially in a university setting, it would be easier to get someone into C++ via Ada rather than just dropping them in there from Python or Java. Does that make sense? It does, it does a lot. You're preaching to the choir, in fact; we are all in agreement. Maybe a last question: what would you like Ada to do differently? I would like to see, and GNAT already has a switch for it, the ability to call a subprogram on its controlling parameter with the usual dot notation, the A.B notation, for everything. I think that would be good, because C++ is trying to move to that, and I think Rust is as well. That would be interesting to see, because it would unify the ability to call everything the same way; sometimes you create tagged types just so you can get the A.B notation. The other thing I would bring in, for controlled types, is initialization from aggregates. There are some non-obvious things about how those three procedures, Initialize, Adjust and Finalize, get called. It took a lot of experimenting to get that mental model correct in my head, longer than I would like. Well, we all know that initialization and finalization are difficult topics in any language. Ada makes them explicit, so you have to really think about them. In other languages you might simply forget about them because they are implicit and you don't see them, but they exist nevertheless. I think that's all we have.
Ada can be difficult to approach due to using a different vernacular to most other languages, and also having many unfamiliar structures and ways of doing things. This is an overview of the Ada language by someone who is new to the language, for programmers from other languages, kept as neutral and objective as possible. See how syntax falls into four categories and the language allows you to opt into features. Learn how Ada fits together at a high level, with an emphasis on the ways Ada differs, using code samples from open source Ada projects. About a year ago, I still thought Ada looked like COBOL. Since then, I've spent a long time trying to understand the language as a whole, since it's different from many others that I've used. This talk isn't to sell you on the language, but instead to provide a general overview of how the language works. This will complement Jean-Pierre Rosen's "Introduction to Ada for Beginning and Experienced Programmers" talk. If you haven't watched his talk yet, I recommend you do so after this one. If you have watched his talk, this should provide a different perspective to help you understand Ada better.
10.5446/57030 (DOI)
Hello, I'm going to talk about the project I've been working on for the past six months with my colleague Pierre-Alexandre Bazin, about proving the correctness of the GNAT Light Runtime Library. So what is this GNAT runtime library? It's what we previously called, at AdaCore, the zero footprint (ZFP), Ravenscar SFP or cert runtimes. Now it's divided between the Light runtime and a more encompassing Embedded runtime, and this Light runtime targets embedded platforms. We have 77 platforms supported right now, most of them bare metal, with a variety of chips: ARM, LEON, PowerPC, RISC-V, x86 and x86 64-bit, and sometimes with an RTOS on top, like PikeOS or VxWorks. The units of that runtime are ready for certification: a subset of them has already been certified for use in various projects in avionics, space, railway and automotive, subject to the suitable certification documents, so DO-178 for avionics, ECSS for space, EN 50128 for railway and ISO 26262 for automotive. So you can say that the units in that runtime are subject to a high degree of scrutiny, with proper specifications put in place, test suites, reviews and so on. You can build this runtime from the bb-runtimes project, which is freely available on GitHub and takes the sources from the GNAT sources subdirectory. Just to give you a quick tour of the units in the GNAT Light Runtime: if I take the x86 64-bit version, because of course each version has a different number of units, it has 182 spec files. I'm only counting the spec files here, because some of them are just spec files, some have bodies, and some are instantiations of generics, so they have a spec plus a body but are declared just as a spec file. Among these 182 files you have, of course, support for the Ada standard library, the a-something files and the i-something files for interfaces. So you have character and string handling, some of the numerics library, support for assertions and exceptions (but in this runtime there is no propagation of exceptions, only local handling or calling the last-chance handler), and interfaces with C. You also have a little bit of the GNAT user library, a few g-something files, mostly for I/O. And you have many files that deal with support for features of the language, the GNAT runtime library proper, 104 s-something files. Here you have support for various attributes like 'Image, 'Value, 'Width, attributes of floating point numbers; support for arithmetic operations, especially on fixed point and floats, and exponentiation, including some numerics; and support for tasking. So, all of that. Now I'm going to talk about proof, using the SPARK tool. It's a formal verification tool doing two kinds of analysis, one called flow analysis and one called proof. Proof is the most involved: it asks provers about the validity of formulas which entail the correctness of the code. Here you can see, very briefly, a depiction of the architecture of the SPARK tool, where it starts on the far left inside an editor, where you can see the code. At the bottom you can see some SPARK code, say assigning the value 42 at index one of array A. Through some code, part of which is shared with the compiler, what I call GNAT here, we transform that code into a WhyML intermediate form.
That's the code in the middle, which represents the assignment on the left, and it is handled by this question mark in the middle, the symbol of the Why3 platform. This platform, dedicated to proving programs, then generates formulas; you can see an example of a bit of a formula in SMT-LIB syntax on the bottom right. These formulas are dispatched to various provers, in particular automatic provers like Alt-Ergo, CVC4 and Z3, which are all included in the SPARK technology. Okay. So, in SPARK we emphasize the fact that proof is not an all-or-nothing endeavor. You can start at the bottom level, which we call the Stone level, where you only have the guarantee of being in a better, safer subset of the language. Then you can go up the scale with the Bronze level, for checking constraints on data and information flow and making sure that everything is initialized. The Silver level is about making sure that there are no runtime errors, in particular none of the corresponding CWEs in the code. At the Gold level, you can start proving key integrity properties, whether they relate to safety or security. And at the Platinum level, you can prove the code fully functionally. Typically the Platinum level is applied to a much smaller subset of the code than where you can apply the lower levels, and the main target is usually the Silver level, so absence of runtime errors. But here, in this project, we are going to target the Platinum level: fully functionally describing the behavior and proving that the code complies with this specification. First, a little motivating example for this project. We are going back to 2012, when we were adding support for big integers, so mathematical integers, inside the compiler, motivated by a feature in SPARK to allow intermediate computations without overflows: doing them with full mathematical precision so the possibility of overflow can be ignored in some proofs. The implementation was done by the late, great developer Robert Dewar, founder of AdaCore, and it used Knuth's Algorithm D for this multi-precision division. So we have large integers represented as arrays of machine integers, and this algorithm is described in The Art of Computer Programming, volume 2; we used the second edition from 1981, and this will have an important role later, which is why I'm mentioning it. At the time, the reviewer told him that in the code he had written there was a possible overflow in this large expression. But Robert, knowing that he had implemented the code from Knuth's book correctly, and assuming that the book was correct, was not convinced by the possibility of an overflow, and asked for an actual issue. So the reviewer went back to the code and finally came up with this example: try this multiplication and division, which corresponds to activating this expression and leads to an overflow. And indeed, Robert recognized that the true result was different from the computation in the code given by Knuth. So what happened? Well, even the best, like Knuth and Robert Dewar, can get it wrong, especially when it comes to these very low-level details related to possible runtime errors, and especially when it comes to overflows. And regarding this specific computation, in fact, the bug had already been fixed in an erratum to volume 2 in 1995: that's this part here, this test, which needs to be done to prevent the possible overflow.
But that's not the end of it: ten years later, this test was fixed again, because it should not actually have been an equality test but a greater-or-equal test. Hopefully, after that, we have a correct description of the algorithm, one that can be implemented without overflows. So that's what we did. But obviously an algorithm gets implemented in many places, and that was the case in the compiler. This algorithm was used in two other units: in the unit Uintp, for arbitrary-precision computation during compilation, and in another unit, System.Arith_64, to support fixed point, because fixed point values are represented internally by the compiler as integers with some scale, so you need this kind of computation. But obviously no two implementations are alike; there are different constraints on the data that is represented. So on the one side we fixed the computation in Uintp, although we couldn't find an obvious bug, because the computation was one to which the fix could be applied. In System.Arith_64 the test was done differently, so we didn't find a bug, didn't apply the fix, and hoped that it was okay. Now fast forward to 2019. Here we were, as we usually do at AdaCore, doing a runtime certification for space, and one of the external reviewers, looking at this unit for fixed point support and in particular at Algorithm D as implemented in Scaled_Divide, made the remark that we should increase the comment frequency to better explain the workings of the algorithm. That led to a new internal review, and during this internal review we detected two possible silent overflows in a related function, Double_Divide, and a missing exception, a case which should raise an exception but did not, in Scaled_Divide. That's not good. It was not so bad, in fact, because as I said the overflows were silent, so we were doing the right thing in the end, and the missing exception only occurred on incorrect input. Still, a colleague from the certification team challenged the SPARK team to prove the unit, so we set out to do that. One week of work later, we had proved all the algorithms except Scaled_Divide, the one implementing Algorithm D, which required a more complex proof. We tried to have a discussion with the certification team, but priorities took over and meanwhile everyone moved on. Until recently: last year, during a summer internship, a very good intern, Pierre-Alexandre Bazin, updated the proof that we had done two years before and finally proved this Scaled_Divide implementation. It has moved since then: it's now in a new file, because it's a generic implementation for both 64 and 128 bits. You have this big contract at first, but it's quite readable. You can look at the postcondition of Scaled_Divide: it says that if you go to big integers, that's this conversion Big applied to all the values involved here, the result R is really the remainder of the multiplication of X times Y divided by Z, and Q, if you ignore rounding, which is just a slightly more complex case, is really the mathematical quotient of X times Y divided by Z. And the precondition is just there to prevent an overflow during this computation. So great, that was proved. We were very happy to learn that, finally, our implementation that had been certified was indeed provably correct.
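To give a flavor of this style of specification (this is a hedged sketch with invented names, not the runtime's actual Scaled_Divide contract), machine division can be specified against unbounded big integers like this:

with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;

procedure Divide_Demo (X, Y : Integer; Q, R : out Integer)
  with SPARK_Mode,
       --  Precondition: the divisor is non-zero and the mathematical
       --  quotient fits back into Integer (this rules out the one
       --  overflowing case, Integer'First / -1).
       Pre  => Y /= 0 and then
               In_Range (To_Big_Integer (X) / To_Big_Integer (Y),
                         To_Big_Integer (Integer'First),
                         To_Big_Integer (Integer'Last)),
       --  Postcondition stated in unbounded mathematical integers.
       Post => To_Big_Integer (Q) = To_Big_Integer (X) / To_Big_Integer (Y)
               and To_Big_Integer (R) =
                     To_Big_Integer (X) rem To_Big_Integer (Y)
is
begin
   Q := X / Y;
   R := X rem Y;
end Divide_Demo;

Whether the automatic provers discharge such a contract out of the box depends on its exact formulation; the point here is only to illustrate the specification style described in the talk.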
So we said: well, let's go from there and prove everything. But the runtime is not exactly in SPARK. There are a lot of uses of untyped memory, for many good reasons: to handle the secondary stack, for array comparisons, for things that are not necessarily even aligned, also to get at the tags of objects, their dynamic type, and for binding to C strings. So there are many reasons to use untyped addresses, the type System.Address, and to do unchecked conversions between addresses and pointer types, and this is not at all supported by the ownership system that we have in SPARK, which is fully typed. Plus, not everything is provable. In these units there is a lot of support for low-level floating point operations, for some Ada language attributes regarding floats; there is also low-level support for numerics like trigonometry; and there are things that do double-double arithmetic, using two floats for extra precision. All of that depends on the representation of floats: in particular it does a lot of overlays between various types, it uses the representation of NaN (not a number) and infinities, which are not in SPARK, and it does complex floating point reasoning. All these things, both at the modeling level, for NaN, infinities and overlays, and at the level of reasoning about floating point computations, are not well supported by provers. So we cannot expect to both specify these functionalities in great detail and prove them automatically in SPARK. We settled for something less ambitious: let's prove everything that fits the SPARK subset and can be both expressed and proved; let's prove all the SPARK things. A first example of that is the interface to C: let's take the unit Interfaces.C. Here we had to express things in addition to what was in the unit, just to be able to specify some of the functionalities. We added, for example, a ghost function C_Length_Ghost, which expresses what it means to have the length of a C string, which ends at the nul character. This ghost function can then be used in contracts; it is itself implemented and proved. The proof makes heavy use of advanced SPARK features like loop invariants, to summarize the state of the current iteration of a loop, or relaxed initialization, for things that are locally uninitialized but are progressively initialized inside a loop. But all of these uses are quite simple, and I'm going to show you. Here is this C_Length_Ghost function; you can see that it's implemented with a simple loop, to get to the point where you find the nul character, and it has a loop invariant, as you would expect when proving a loop in SPARK, to say that so far no character was found to be nul. This C_Length_Ghost function can then be used to specify other functions, like the function To_Ada. This function To_Ada, which takes a char_array and returns a String, does something different depending on the value of the other parameter, Trim_Nul: it goes up to the nul character or not. Again, in To_Ada you will see a loop invariant to summarize the work done so far. And here, something else: when we declare a variable, the array, without initializing it, we can declare it with Relaxed_Initialization, and then, inside the loop, we need to state up to which index this array is initialized, so that proof takes care of proving the initialization of this variable. Usually initialization is handled by flow analysis, in a much coarser way, which cannot be applied here because the array is initialized progressively.
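As a hedged sketch of this loop-invariant style (an invented function, not the runtime's actual C_Length_Ghost or To_Ada), here is a search for the first nul character in a char_array:

with Interfaces.C; use Interfaces.C;

function First_Nul (Item : char_array) return size_t
  with SPARK_Mode,
       Pre  => (for some I in Item'Range => Item (I) = nul),
       Post => First_Nul'Result in Item'Range
               and then Item (First_Nul'Result) = nul
is
begin
   for I in Item'Range loop
      if Item (I) = nul then
         return I;
      end if;
      --  Summarize the completed iterations: no nul seen so far.
      pragma Loop_Invariant
        (for all J in Item'First .. I => Item (J) /= nul);
   end loop;
   raise Program_Error;  --  unreachable thanks to the precondition
end First_Nul;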
Usually this is done by flow analysis, in a much coarser way, which cannot be applied here because the array is progressively initialized. Okay, now let's move on to fixed-point support, returning to this Scaled_Divide and the other functionality. It's implemented in these various units — Arith_32, Arith_64, Arith_Double — the generic being instantiated in Arith_64, for example. Here we took the comments in the code, which were quite detailed, and translated them into SPARK contracts. You can see an example with this Add_With_Ovflo_Check. We have a function here, In_Int64_Range, which says whether an argument — a big integer, so an arbitrary, unbounded integer — is within the range of a signed Int64. This is a ghost function again, something that is only there for specification and proof. And on the function Add_With_Ovflo_Check we can now add a precondition that says that, for this function to operate properly, it needs to take arguments which won't overflow when you add them. So when you do the summation in big integers and it fits in a signed 64-bit integer, then the result will indeed be the result of the addition. Let's look at that. In the 64-bit integer version, the spec is here, and the implementation just renames the generic. So let's look at the generic spec and implementation. The spec is just what we described. The implementation uses a number of what we call lemmas: ghost procedures — so only there for proof — which prove a given property, the postcondition, from another property given as a precondition. That's a way to isolate a proof so that the automatic provers can do it much more easily. This proof can be quite involved, as you can see here, with a number of cases being described, assertions to guide the provers, even some lemmas. These lemmas are again the same kind as what I just showed, except that I have extracted them into a different part of the file. And in the end the code is not obscured by all this ghost code; we're just calling the lemmas where they are needed to prove the postcondition. So that's a rather simple case, in fact. Now we turn to the dreaded Scaled_Divide. This is much more complex, because here we're mixing signed integer arithmetic, modular integer arithmetic, and arbitrary unbounded integer arithmetic with these big integers. You can see that you have quite big lemmas, and these are also proved with more complex reasoning, and in the end the implementation has to call these lemmas in the right places and in the right order for the proof to go through automatically. We have also proved the character and string handling. For characters, it's in Ada.Characters.Handling — I have an example at the bottom of the slide — and for strings, the bounded strings, the fixed strings and Ada.Strings.Maps, plus supporting units that implement part of these functionalities. Here the specification comes directly from the Ada Reference Manual: the description was translated into SPARK contracts. A very simple example of that is this function Is_Control in Characters.Handling. It is specified as: true if Item is a control character, and a control character is defined as a character whose position is in one of the ranges 0 .. 31 or 127 .. 159. And that's exactly what you have in the postcondition here.
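As a sketch of the style being described — a ghost range predicate, an overflow-free addition, and a lemma proved once and reused — the following is an illustration under assumed names, not the actual Arith units:

```ada
with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;

package Arith_Sketch with SPARK_Mode is

   subtype Int64 is Long_Long_Integer;

   package Conv is new Signed_Conversions (Int => Int64);

   function Big (V : Int64) return Big_Integer is
     (Conv.To_Big_Integer (V)) with Ghost;

   --  True if the unbounded value V fits in a signed 64-bit integer
   function In_Int64_Range (V : Big_Integer) return Boolean is
     (In_Range (V, Big (Int64'First), Big (Int64'Last))) with Ghost;

   --  Addition whose precondition rules out overflow, so the result is
   --  exactly the mathematical sum
   function Add_With_Ovflo_Check (X, Y : Int64) return Int64
   with
     Pre  => In_Int64_Range (Big (X) + Big (Y)),
     Post => Big (Add_With_Ovflo_Check'Result) = Big (X) + Big (Y);

   --  A lemma in the style described above: a ghost procedure whose
   --  postcondition is proved once, then reused at call sites to help
   --  the automatic provers
   procedure Lemma_Mult_Is_Monotonic (A, B, C : Big_Integer)
   with
     Ghost,
     Pre  => A <= B and then C >= To_Big_Integer (0),
     Post => A * C <= B * C;

end Arith_Sketch;
```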
The result of this function call is whether the position of the Item argument is in these ranges. The specification can be a bit more complex for more involved functions. For example, here in Ada.Strings.Fixed, you can see this function Index, which looks for the position of a given pattern in the source string. There are many cases described in the specification, and they turn into this large Contract_Cases aspect. We're using a special form of contract which gives, depending on the various cases here before the arrow, the result that is expected. And this unit is in fact based on another implementation in another unit, so this other implementation has a similar contract, and its body is verified against that contract; and it calls other variants of Index, which are themselves proved against their specification. Let's move now to exponentiation. There are various implementations of exponentiation: for signed integers, for modular integers, and depending on the value of the modulus. For example, here the binary-modulus version is the simplest: it is specified at the top simply as the left operand raised to the exponent on the right. The signed version is a bit more complex, because we need to make sure there is no overflow, so it has a precondition that uses a computation in big integers: we do the exponentiation in big integers and check that the result fits in the range. An even more complex case is modular exponentiation with a non-binary modulus — when the modulus is not a power of two, but can be, say, 42. Here, the way to specify it is to say that the result, when converted to a big integer, is really the operation done in big integers, modulo the value of the modulus. We have to do that because just doing it with modular machine integers would apply a double modulo and be incorrect. If we look now at the code: as I said, the simplest one is this modular exponentiation, and here it's a simple loop with, as you would expect, a loop invariant and a few assertions. Now, if you look at the signed version, it's slightly more complex: there are a number of lemmas restating how the operation works in big integers, especially because exponentiation is more difficult to reason about with the automatic tools, and so the implementation also has more ghost code to drive the proof, and calls to these lemmas to help the provers. And the most complex is this exponentiation of a modular type with a non-binary modulus, because here we have signed integers, modular integers, and exponentiation with big integers, so we have many more lemmas and much more effort to drive the provers towards the final proof. I would like to finish with the support for 'Image and 'Value. 'Image is a way to print values of various types, and 'Value does the converse: it takes a string and returns the value. The goal here is to be able to show that the Image and Value functions in the runtime are inverse functions — to give a sufficiently precise postcondition for the Value function of a type, so that the postcondition of the corresponding Image can state that Value applied to the result of Image gives back the original value. That's what we do here with Image_Boolean in this code snippet: you can see at the bottom that indeed, when we call Value_Boolean on the resulting string S (1 .. P) — because P and S are the outputs of this procedure — we get something back.
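For reference, the simple Is_Control example mentioned above can be rendered as the following sketch; it is only an illustration of how the Reference Manual prose maps to a postcondition, not the actual Ada.Characters.Handling unit:

```ada
package Characters_Sketch with SPARK_Mode is

   --  "True if Item is a control character": a character whose position
   --  is in one of the ranges 0 .. 31 or 127 .. 159
   function Is_Control (Item : Character) return Boolean
   with
     Post => Is_Control'Result =
               (Character'Pos (Item) in 0 .. 31 | 127 .. 159);

end Characters_Sketch;
```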
And it gives back the input of Image_Boolean. We are in the process of doing that for all of the other types. If we look now just at this, I wanted to show the specification that is needed for the Value function for unsigned integers. It's here — Scan_Unsigned is one of the functions used — and you can see that it has quite a complex postcondition, because we need to describe what happens with leading blank characters, what happens with the sign, and the various ways to specify the value. It depends on all these other ghost functions that describe how to parse the base, the value in that base, the exponent, and so on. So that's all for this project. The current status is that we managed to find a few mistakes and fix them — possible overflows and failing run-time checks. That was the good news. An example of that is this simple test: if you compile and run this code, you will get an error, but not necessarily the one you expect. Here we are trying to get a Boolean value from a string which just contains a blank character. This is incorrect input, and what you would expect — and what you will get in the next version — is this last line: a Constraint_Error is raised, because it's a bad input for the Value attribute. Well, if you use the GNAT Community 2020 version, you will in fact get a segmentation fault, because underneath there is an overflow in the runtime support code, and you end up reading and writing far past the string. This overflow happens, as you would expect, because the string of one character is allocated with the index Natural'Last — at the extreme value of the possible indices. The situation is slightly better with the latest version, GNAT Community 2021: here, instead of the segmentation fault, you get a Constraint_Error, but not the one you would expect. In fact, in this version of GNAT Community, the runtime was compiled with run-time checks on but not overflow checks, so the overflow is silent; but then you get a run-time check failure when trying to access the array S beyond its start, so you get this "index check failed". So, where we are with that: we have a partial proof of the GNAT light runtime library — 35 units. That's not nothing, but it's far from the total number of units: as I said, the x86-64 version contains around 180 units. The daily proof takes one hour and thirty minutes on a quite big server, a 63-core Linux server, and we do it for all the configurations that I mentioned: for ARM or x86 Linux, with VxWorks or bare metal. In total, we added many specifications inside these units: almost 400 preconditions, 500 postconditions. The proof also requires more than just the specifications to drive the prover towards the proof — what we call ghost code, which is not executed in the end, in particular because inside these units we specify that ghost code should be ignored during the final compilation. We had to add almost 150 loop invariants, almost 400 assertions, and almost 300 ghost entities — types, variables — and many of these are actually lemmas: 150 lemmas, which is what I showed you, ways to prove a given property in a smaller context so that the automatic provers can do it automatically. And that leaves us with two questions for the future. Can this effort benefit future certifications of the runtime, by adding more guarantees to what we can do today with testing only? And also, what can we do beyond what SPARK supports currently?
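Going back for a moment to the 'Value example above, a minimal, hypothetical reproducer of the expected behaviour looks like the code below; the actual test additionally placed the one-character string at index Natural'Last to trigger the overflow, which is not shown here:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Bad_Value is
begin
   declare
      --  " " is not a valid image of a Boolean, so 'Value must raise
      --  Constraint_Error; the buggy runtimes crashed or failed another check
      B : constant Boolean := Boolean'Value (" ");
   begin
      Put_Line ("Unexpectedly got " & Boolean'Image (B));
   end;
exception
   when Constraint_Error =>
      Put_Line ("Constraint_Error raised, as expected");
end Bad_Value;
```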
Can we do something in particular for units that use addresses and conversions to pointers, which currently are not supported? Well, thank you for your attention, and I'm waiting for your questions. Then you consider that having 60-plus lines of contract for five lines of... Okay, I'll continue, since you disconnected. So the question was: do you consider that having 60-plus lines of contract for a declaration which is initially five lines makes the specification unreadable? Initially I understood this question wrong — I thought you were talking about a five-line body. And indeed, there should be a relationship between the complexity of the function that you are trying to specify and the specification, and it depends how far you want to go in terms of specification. Here we went for the platinum level; we wanted to fully specify this function functionally. So the fact that the function declaration in Ada is five lines is a bit irrelevant: it doesn't really say anything about the complexity of the function that you're trying to implement. You could have underneath a whole system that you're implementing, or just a really small leaf function. And that's where I think it matters: do you learn something by specifying this larger contract, like these 60 lines of contract? In this case, the question was pointing at the Index function, which I presented in my talk, for string manipulation, and the contract is actually quite readable: it specifies exactly what the cases of the Index function are. If you don't write this as a contract, well, you have the English prose that you will find in the Ada Reference Manual, which is itself usually not self-contained — you have various paragraphs of the reference manual which together end up defining the behavior of the function. So I think in the end there are two answers. The contract should matter to you: it should say something interesting, and otherwise maybe don't do that; it also relates to the level at which you want to prove the code. And then, in terms of IDEs, I hope that in the future we have IDEs which allow hiding this contract code like we hide comments. That's something we've discussed, for example, for our future support of SPARK in various IDEs, whether it's Visual Studio Code or GNAT Studio. Thank you. Another question: a previous speaker made some comments regarding provers having to work together to prove everything. How does it work in your experience — did you use all three, or one? So yes, SPARK is tailored to benefit the most from the combination of provers, so that you have the least to do in terms of helping the prover. If you limit yourself to one prover, you're cutting yourself off from two-thirds of the automation, and that's just a pain. In fact, what I did was to use the four provers that we have now. There's one called Colibri, which is not enabled by default because we're still working with its developers on a better integration, but it's inside the technology and you can enable it — there's just a switch to ask for all four provers, and here it was useful. So yes, we use them constantly as soon as you have more complex proofs that go beyond basic type safety. Even the proofs that Rod Chapman showed, about more complex predicates and types, needed the three provers, and here I even needed the fourth one. Okay, thank you.
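For readers who have not seen the Contract_Cases form being discussed here, the following is a small, hypothetical sketch — deliberately much simpler than the real Ada.Strings.Fixed.Index contract — just to show how each case before the arrow selects a situation and each consequent states the expected result:

```ada
package Index_Sketch with SPARK_Mode is

   --  A much-reduced, illustrative cousin of Index: find the first
   --  occurrence of a character, or return 0 when it does not occur
   function Index_Of (Source : String; Ch : Character) return Natural
   with
     Contract_Cases =>
       ((for all C of Source => C /= Ch) =>
          Index_Of'Result = 0,
        (for some C of Source => C = Ch) =>
          Index_Of'Result in Source'Range
            and then Source (Index_Of'Result) = Ch
            and then (for all J in Source'Range =>
                        (if J < Index_Of'Result then Source (J) /= Ch)));

end Index_Sketch;
```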
Stefan asks: how do you debug and identify the issues when you write a complex contract and it fails during execution? Yeah, here it didn't really apply, because the way we use contracts for the runtime library is that we don't want these contracts to be executed — we never execute them. We have a pragma Assertion_Policy that we use all over the runtime to say that these contracts will never be executed; these contracts are really here only for proof. And if you want to make sure that the preconditions in particular are correct, you have to read them. In general, when you develop contracts, what you should be careful about is the contracts at the border with things that are not proved — typically the preconditions of a library, for example, when you call it from elsewhere: if the code that uses this library is not proved, the precondition could fail. Either that's a good thing, because then you prevent getting into your API with the wrong context, or you don't want it to happen, and then you have to test enough. How do you identify that a precondition, for example, is wrong? Well, a failure in a test would give you a scenario which you can debug, and contracts in Ada or SPARK are just like code: you can debug and test them. Okay, thank you. We have about a minute. Fredrik asks: did the proofs require a lot of ghost functions? Yes, there were quite a lot of ghost functions, as I mentioned, but even more ghost lemmas — things that you need to prove in isolation, because then the context is smaller for proof. I also wanted to address one question, which is: is it worth it? That was the same question as for Rod. For us here, yes, at least in terms of making sure we are sufficiently confident in this runtime; it remains to be seen if we can exploit that more generally in certification. I cannot say right now, for example, that we will benefit in certification from this proof compared to the traditional approach of specification and testing. Okay, so we have 10 seconds. There are questions that have not been answered; the room will now open and everybody is more than welcome to join. Thank you, Yannick. Thank you.
As a programming language, Ada offers a number of features that require runtime support, e.g. exception propagation or concurrency (tasks, protected objects). The GNAT compiler implements this support in its runtime library, which comes in a number of different flavors, with more or less capability. The GNAT light runtime library is a version of the runtime library targeted at embedded platforms and certification, with an Operating System or without it (baremetal). It contains around 180 units focused mostly on I/O, numerics, text manipulation, memory operations. Variants of the GNAT light runtime library have been certified for use at the highest levels of criticality in several industrial domains: avionics (DO-178), space (ECSS-E-ST40C), railway (EN 50128), automotive (ISO-26262). Details vary across certification regimes, but the common approach to certification used today is based on written requirements traced to corresponding tests, supported by test coverage analysis. Despite this strict certification process, some bugs were found in the past in the code. An ongoing project at AdaCore is applying formal proof with SPARK to the light runtime units, in order to prove their correctness: that the code is free of runtime errors, and that it satisfies its functional specifications. So far, 30 units (out of 180) have been proved, and a few bugs fixed along the way (including a security vulnerability). In this talk, I will describe the approach followed, what was achieved, and what we expect to achieve.
10.5446/57031 (DOI)
So, hello and welcome to my presentation on exporting Ada software to Python and Julia. In essence, I would like to share my experiences with GPRbuild, the project manager of the GNU Ada compiler, for exporting Ada software. The motivation is that I have a large library of software that I would like to export to a more commonly used, current environment. I will say something about interface development. Interface development can be divided into two types: the easiest type is when the Ada software remains in complete control, but the most useful interfaces actually give control to the other software that is going to apply it. At the very end of this talk, I'm going to say something very briefly about my application. The most important lesson here is: don't wait until you have close to a million lines of software to start using a project manager. And I have prepared a dedicated GitHub repository that contains standalone code. Okay. So, when one exports Ada software, you want to make the build process as simple as possible, because it's highly likely that the client — which is how I will refer to the other software — might not be quite so familiar with your software. The other purpose is that you would like to export all the functionality that you have as an Ada programmer to programmers in other languages. My application domain is mathematical software, and I should definitely mention SageMath, which is a very large open source mathematical software system; my activities are actually motivated mainly by SageMath. SageMath has as its interface the Jupyter notebook, where Jupyter stands for Julia, Python, R and many others. If you have ever used a computer algebra package such as Mathematica or Maple, you may be familiar with a notebook interface. The main importance of Jupyter is that it is not tied to any particular system or programming language. Now, my talk is mainly intended for programmers. There are also other interfaces to my software which only require the executable; there one expects that one is still able to build and to compile. So what is the main principle in the way I would like to propose to do interface design? C is kind of the least common multiple of programming languages. If you have a good C interface, then it will not be that difficult to make interfaces to your software from other languages, because both Python and Julia interface well with C code. And the main point of all this is that the build process can be made automatic. What we also want is that this is platform independent: when we talk about a shared object file, it's a .so file on Linux, a .dll file on Windows and a .dylib file on Mac, and the single build script will actually work on all three platforms. This is the main advantage over makefiles: makefiles tend to be system and platform dependent. Okay, so GPRbuild, the project manager, supports Ada, C and C++ as languages. Okay, so I could immediately hop to my application, but I think it's probably better that we look at a toy example, where we think of swapping the characters in strings. If you can work with strings — and here, if you look at the low-level representation of strings — then you can also work with arrays of numerical data.
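To make the "single build script" idea concrete before the demo, here is a minimal sketch of a GPRbuild project file mixing Ada and C sources and producing several executables. All file and project names here are hypothetical, not the ones from the actual demo repository:

```
project Demo is
   for Languages   use ("Ada", "C");
   for Source_Dirs use ("src");
   for Object_Dir  use "obj";
   for Exec_Dir    use "bin";
   --  Three executables: an Ada hello world, the Ada main that drives
   --  the swap package, and a standalone C test program
   for Main use ("hello.adb", "main_swap.adb", "test_swap.c");
end Demo;
```

Running gprbuild on such a project file produces the executables, and the same project file works unchanged on Linux, Windows and macOS, which is exactly the advantage over makefiles mentioned above.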
On the slide I also want to distinguish between the two types of interfaces: one where the Ada main is in control, so there's still an Ada main that regulates the reading, the writing and of course also the processing; and one where a C program has control, and there is an interface procedure which the C program calls whenever needed, whenever there is an action for which it requires the interfaced package. Here's how this can go. In Ada we have a package Swap: it is initialized with the given string, then it does the swap, and then the string is returned. Now, the interface routine Call_Swap will be called from C. It's very good to have a verbose option whenever you design something like this, so you can track what is actually happening. But the important thing is also that all the arguments are typed. When I once presented my interface, another programmer asked: why don't you just void everything, why don't you pass a pointer to void? That would be the typical C way of thinking, but I like typing. So you can still keep typed control: you pass the identification number of the kind of action you want to do, the data you want the action to operate on — here it is just a pointer to a sequence of integers — and the size of the data. So you can still keep the strong typing. Here is the C interface. Of course, testing, testing, testing is important, so we have a standalone C program that actually tests the jobs: job number zero is initializing, job number one is doing the swap, and job number two is retrieving the swapped word. The adainit and adafinal calls are optional later on, if the interface library is automatically initialized. This is the main program in C. Here is now the building of the executables — actually three executables: there is the hello world, there is the Ada main, and then there is the C test program. And this works on all three platforms. So this is a project to build executables. Then there are library projects. The code for the library project is a little bit more involved: one has to define the library interface, so instead of the main programs, one now has the Call_Swap sitting in there. There are the switches, and actually the -lada linker option is something for which I couldn't find help in the GPRbuild user's guide; I actually needed the internet — a Stack Overflow suggestion. Okay, this is the demo library project. Perhaps I can actually run gprbuild on the demo library project file; that will compile everything. And now I can go to my Julia folder and execute the Julia code called AdaSwap jobs, and that will execute the corresponding swap, but it's now the Julia code commanding the action. Let me very briefly indicate how the Julia code works. It looks ugly, I admit that, but here is the key point: I executed this on a Windows computer, and the same Julia code actually works on Linux and on Mac as well (Mac Intel). Julia has a very interesting ccall feature that allows calling functions in a shared object — here the Ada Call_Swap. I will not drag you through the syntax, but it is passing a word as a sequence of integers; you see it printed in the demonstration there. OK, Python is actually a little bit more complicated: you have to do some C/C++ programming to define the extension module, so that you can then import what I have called here libdemo.
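To make the typed interface described above concrete, here is a minimal Ada sketch of what such an exported entry point could look like. The names (Call_Swap, "adaswap") and the exact profile are assumptions based on the description in the talk, not the code from the actual repository:

```ada
with Interfaces.C; use Interfaces.C;

package Swap_Interface is

   --  Single entry point called from C, Julia or Python.
   --  Job 0 initializes the package with the given word, job 1 swaps the
   --  characters, job 2 retrieves the swapped word.  Data points to the
   --  word encoded as a sequence of integers of length Size.
   function Call_Swap
     (Job  : int;
      Data : access int;
      Size : int) return int
   with Export, Convention => C, External_Name => "adaswap";

end Swap_Interface;
```

On the Julia side, ccall can then invoke the exported symbol in the shared object built by the library project, which is the mechanism demonstrated with the AdaSwap jobs script.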
The main point is that this can also be compiled without a makefile: you just run python setup.py build, and that compiles the extension module. Critical here is that you give the extra objects — the libdemo that was compiled with GPRbuild. I must confess that I could only get this to work, for now, on a Linux computer, so there are still some issues here. But the main point, and the main result of this presentation, is that you can export your Ada software without having to use makefiles, both to Python and to Julia. OK, so now two more minutes about the application. It is all about polynomial systems, and it was written before the time of GPRbuild — I'm a latecomer to GPRbuild. The main lesson learned is that it really matters if you have multiple targets, if you have multiple test procedures, and certainly if you also link other software in; my software contains a substantial C++ package as well. There is a Python interface which I still have to extend, also for Windows — that's a work in progress. But the main result is that now, with ccall, there is a libPHCpack shared object, also built with GPRbuild, which gives a much more efficient interface to the Ada software from within Julia than just calling an executable, which is what we developed last summer. OK, last slide, some pointers. There is the PHCpack source code distribution, where you can see, for a very large software system with mixed-language components, how the libPHCpack is defined and built. If you are new to GPRbuild, then I would recommend that you look into the toy example that contains all the demo code for this talk. So thank you for your interest in this work. So we are in the questions and answers time. Thank you, Jan, for your presentation. We already have a few questions. The first one, or the most upvoted one, is: how did the idea of writing mathematical software in Ada get started? The work on the software started 30 years ago. At that time — and still today — for writing large software packages, Ada seems to be a very good choice. Is PHCpack a large software package? Yes, I am getting close to a million lines of software. My idea for bundling everything was that in academia you write papers, and by your third paper you may wonder whether you can still use the techniques from your first paper. So the main goal of PHCpack was to have all those algorithms that I developed still available, say, 5, 10, 20 years later. That's the main point. Okay, we have another question. Wasn't including GPRbuild in your workflow too painful? It was painful, but partly because of the design mistakes that I had made. One of the most common design mistakes, if you are integrating, let's say, C++ software and Ada software, is not to think about naming conventions: if you build a library, for example, you cannot have two different gcd (greatest common divisor) functions. So yes, it was painful, but I was confronted with all my design mistakes of the past, and in a way the software is now a lot cleaner. Okay, thank you. Another question reads: what do users of PHCpack use — did you have Python interfaces, or Ada directly? I would say neither. Most of my users directly use the executable that is available from my website, and the fact that the executables are there is fine. There are Python users. Julia is still under development and is still too new to use.
But the Python interface is used, and part of the purpose of this talk was also to make the build process of the Python interface a lot more convenient. There have been sporadic users of the Python interface, but the main users are actually running the executable and might not even be aware that it is written in Ada. So you would say that the vast majority of users just use the Ada binary directly? Yes, they use the Ada binary, that's it. Okay, we have another top question. Do you intend to continue with Ada for further development of PHCpack, or are you considering other languages? This comes from the Ukrainian. Well, I like the term language-agnostic computing: users of PHCpack are not tied to one particular language. But I'm still putting in original algorithms directly coded in Ada. The Julia and the Python interfaces are mainly so that my users — who are often mathematicians and scientists, not programmers — can use whatever interface they like. Okay, another question from Maxime. Is it hard to exchange real Unicode strings instead of integers? Oh, yes. In my world, all the data are actually numeric. I'm using strings as kind of the universal type for serialization of objects, so if your software can interface with strings, then it can interface with almost any object. The data we work with are either polynomial representations, like you would see in a computer algebra package, or numerical vectors. Okay. Another question is: why did you go with Ada over Fortran? Oh, at the time when I started, Fortran was quite limited — we are talking 30 years ago; I think back then Fortran 90 was just coming up. In terms of expressiveness and flexibility, Ada was definitely ahead, and I think it still is. In terms of popularity, Ada and Fortran are a bit on par, but Fortran is definitely more niche; in scientific computing it is probably still the standard. But Ada has a lot more advantages as far as software engineering goes. Okay, another question from Fred Praca. Do your users know that everything inside is Ada, and do they care? Actually, they most likely do not know. What they care about is that they can get a good answer out of it. The feature they like most is the black-box solver, so they often do not even care about the algorithms that are in there. Okay. I think there are no more questions that have not been asked. Oh, yes, a new one: I presume you use PHCpack in your teaching — do you use Ada in that context as well? Yes. In a way, this talk was kind of meant for future generations of students, and current ones, who will be looking more into Ada. I do have lectures on high-level parallel programming, and Ada fits very well in there. Well, now that you mention parallel programming, I suppose you mean tasks, not the new parallel keyword that's coming up? No, I mean tasks, yes. Okay. And from my side, I would like to thank you a lot for this presentation. I'm the creator of an Ada Scheme interpreter, and using GPRbuild was the biggest struggle that I had — not in the actual code, but dealing with GPRbuild. So I really thank you for this presentation. And another question: since you are dealing with mathematical software, and I suppose it does a lot of computations, is the performance good? Yes, the performance is okay. One often has to do some specific tweaks if you really want the utmost performance.
But performance is indeed quite good, yes. Okay. And one last question — we have less than 50 seconds. Ludovic Brenta asks: what interfaces besides Python and Julia do your users use? I think the main one is the MATLAB interface. MATLAB is still alive and kicking, so the MATLAB interface is being used. And does that also get generated from the GPR library? No, the MATLAB interface goes through the binary. Okay. Well, we are coming to the end of the Q&A. Thank you, Jan, for your presentation. If the viewers have any more questions, this room will be opened in a few seconds. Be patient, and thank you. You're welcome.
The objective is to demonstrate the making of Ada software available to Python and Julia programmers using GPRbuild. GPRbuild is the project manager of the GNAT toolchain. This talk will first present a self-contained small example to illustrate the making of shared object files from Ada software, so the software can be used in Python and Julia. The second part of the talk concerns the application to PHCpack, a free and open source software package to solve polynomial systems by homotopy continuation methods, written mainly in Ada, and available at github.
10.5446/57033 (DOI)
As we said, those constructs should be created by means of a more low-level approach, that is, by a direct implementation of explicit code which references CPU functions. In these slides I wrote some considerations about the platforms and their capabilities, as we will see later in an example; many platforms are able to start up out of reset and manipulate elementary I/O. Although the primary target of SweetAda is already completed — that is, to create a build system able to produce code — it will be interesting to make SweetAda powerful enough to support basic operating system primitives, but, as I said before, this is without any doubt a big challenge. For example, a possible path for SweetAda could be to provide a low-level layer which exposes APIs or system calls, and which in turn can be used by a more advanced runtime system, or by another piece of software, as an underlying base. OK, we can see here the SweetAda layout. During this demo I will show you a Linux machine, but the same considerations hold for other environments, like a Windows machine. This is the top-level directory, which can be made resident in whatever directory of the file system you choose. The top-level directory contains various subdirectories, makefile fragments, and configuration files. The subdirectories are arranged by function. We have, for example, a cpus directory, which contains basic low-level code for all the CPUs; a core directory, which contains the bulk of the SweetAda code; a modules directory, which contains CPU-independent code; the platforms directory, which contains all the targets that are covered by SweetAda; an rts directory for the runtime system; and so on. We can browse into the core directory, and we can see some basic units which can be withed.
One of the most useful is the Bits unit, which declares basic low-level types. Then we have Console, which produces output on a character-oriented device; it contains subprograms to print out decimal numbers, strings, memory dumps, and so on. We have Malloc, which implements a simple memory allocator. We have Memory_Functions, which implements CPU-independent, non-optimized memory subprograms with C-language-style function definitions. We have MMIO, which implements transparent operations on byte- and word-sized objects located at a given address, and LLutils, which contains utilities for byte swapping and address scaling. The Memory_Functions, MMIO and LLutils subprograms are separates, so they can be optimized — and in fact some of them actually are, for example for the x86 CPU. And thanks to the build system, every unit of SweetAda can be overridden and selectively optimized. Drivers contains a few drivers, just the minimum, to handle for example UART communication and an NE2000 network interface. Modules contains various units at a higher level, like TCP/IP and link-layer networking, and the FAT file system. OK, so let's view an example of an application. I will show you an application based on an emulated PC x86 machine running under QEMU. Let's take the platforms directory and the PC-x86 platform; we are going to configure a PC machine. We can see that there are various subplatforms, identified by a platform prefix. Some are real platforms based on true PC Pentium-class motherboards, and some are virtual, that is, they are emulated. We choose a QEMU target, so we can run a simulated environment. We can perform a create-kernel-cfg command, specifying the platform name, and we also need to specify the subplatform here. Now we can see the kernel.cfg file, which contains the platform definition, and we can now perform a configure command. There is no need to specify the platform anymore, because SweetAda already knows which is the target platform, so we can run a make configure command directly. This is a brief recap of the information collected in the configuration phase. We have an SFP runtime system with an SFP profile, here and here. We are going to use the libraries of the runtime system, because SFP has some real code to be linked into the application, and we can build the system. An error shows up, because the runtime system does not exist: this installation is a plain setup downloaded from GitHub, so there is no prebuilt code inside. We have to build the runtime system. We have to specify the CPU, and we have to specify the toolchain triplet, too. We choose a standard SweetAda toolchain, but you are free to use your own toolchain, as long as it provides the standard GNAT utilities, like gnatmake and so on. OK, this is the runtime system; we can see the runtime system just built. Here are the spec files together with the bodies, and we can also see the libraries and the object files. OK, we can now restart the build. One thing to say about the x86 target is that it runs with the memory management unit turned on, although it is a very limited, quick-and-dirty one-to-one mapping. OK, we have just built the system. Now we can build the binary file, which can be used as a BIOS for the emulated PC. We can see that the file size is exactly 128 kilobytes, which is the standard size for a PC BIOS. And in order to run this target, we now have to perform a make run command. This command takes what we specify in the platform configuration file.
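As an aside, to give an idea of what the byte/word MMIO primitives mentioned above look like in Ada, here is a minimal sketch of address-based, volatile access. It is not the actual SweetAda unit; the names and profile are illustrative only:

```ada
with System;
with Interfaces; use Interfaces;

package MMIO_Sketch is

   --  Read/write a 32-bit word located at an arbitrary address, the kind
   --  of primitive a memory-mapped peripheral register needs
   function Read_U32 (Addr : System.Address) return Unsigned_32;
   procedure Write_U32 (Addr : System.Address; Value : Unsigned_32);

end MMIO_Sketch;

package body MMIO_Sketch is

   function Read_U32 (Addr : System.Address) return Unsigned_32 is
      --  Overlay a volatile object on the given address
      Object : Unsigned_32 with Import, Volatile, Address => Addr;
   begin
      return Object;
   end Read_U32;

   procedure Write_U32 (Addr : System.Address; Value : Unsigned_32) is
      Object : Unsigned_32 with Import, Volatile, Address => Addr;
   begin
      Object := Value;
   end Write_U32;

end MMIO_Sketch;
```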
Going back to the configuration, we can see that the run command is performed by means of a SweetAda utility, a little interpreter to ease the execution of the QEMU emulator; it includes the facilities to run and manage the PC machine being emulated. QEMU exposes two serial ports, number one and number two, and number one is the one chosen by the BSP to display messages on a console. OK, the system starts up. This is the VGA screen, and these are the two serial ports; number one is the console. The system shows the various PCI interfaces found, along with other information. This is the IOEMU window, one of the utilities for QEMU distributed together with SweetAda. It's kind of an I/O card put in a slot of the emulated PC, and it shows some I/O ports in the form of LED images and a small display. OK, this is the network interface exposed by QEMU — the .3.1 host — and we are going to stimulate the platform by pinging the network address of the platform, the .3.2 host. This is the ping sequence — the VGA, the console — which shows the ICMP sequence. OK. And this LED bar shows the number of network buffers allocated in real time, and the display shows how many buffers are present in the FIFO queue of the network interface. OK, let's stop. And this LED timer is connected to the system tick, which runs at 1 kHz, but it lights up only once per second. Thank you all for listening; I hope you have found this video interesting. You can refer to the links listed here, and I will be available on the various social channels. You can reach me also by e-mail and on comp.lang.ada. Feel free to ask any question. We are starting in a few seconds. Yeah. OK, the Q&A has started, and it looks like we lost the speaker. Yeah, there he is again. Sorry for the technical problem. So now we are in the questions and answers time, and I wanted to ask you a couple of questions. The first thing, so that the public knows: what is the final goal of SweetAda, or what do you intend it to become? From my point of view, the final target, let's call it that, is already accomplished, because I wrote SweetAda in order to be able to create some form of application on any possible target, even an IBM mainframe. So the focus of SweetAda is on the framework, on the build system — a little thing to write simple code and test it on a platform. The rest — the SweetAda libraries and the little applications — is only a companion to the system. Obviously, I hope that the set of libraries, the low-level primitives, will maybe grow, so we can see a future from a more systematic point of view — maybe not a true operating system. One thing is that SweetAda was not designed from scratch: it's more an evolution of a little thing, a small Motorola 68k firmware. I gradually exchanged the C language for Ada, and the thing is now in its current state. But there is not much time, and there is a question: is there a relation to the Ada Drivers Library? Can the Ada Drivers Library be used with SweetAda? We have about 20 seconds. To be honest, I don't know. I think maybe yes, because some parts can be inserted.
SweetAda is a lightweight development framework whose purpose is the implementation of Ada-based software systems.
10.5446/57034 (DOI)
...using dependencies for vulnerability discovery and tracking. We heard three very interesting talks earlier and had some discussions on how we can use dependency information for identifying and discovering vulnerabilities, and, as we were discussing before we went live, they presented very different approaches based on source code analysis, binary analysis, and APIs for accessing the information. We can perhaps follow the same order as in the preceding talks, and start with Marta: perhaps you could tell us what were the most important lessons from Oniro that can be adopted by other organizations, and what other things you would like to share with us. Sure. I was very interested in all three talks, because they cover different ways of doing things. At Oniro we are doing a source-based distribution. We have the advantage that we do have all the source code, and our policy is to include only components we have source code for, which means that we do not have to do binary scanning. However, I was also looking into binary scanning, because even if you try to have source code for everything, you have the firmware, and you sometimes have other things that come only in binary form. Also, you can use binary scanning if you already have an existing application, for whatever reason. My takeaway is to use the technology that works for your use case. If there are two or more technologies that can work for your use case — if you can do both source and binary scanning — do that: you will get more information, and then you can maybe find issues that you would not have seen using only one method. We are working mostly on source, but definitely all the technologies have their place, and it was very interesting to hear about them. Thank you. Anthony, you had a small glitch after the end of your presentation, during the discussion. Maybe you can start by answering the questions that we had. I remember Hugo asking how we can generate a valid SBOM in an OS like Red Hat or CentOS, or if there is a way to understand the test coverage. We can start with that, perhaps. Yes, okay, thank you — and apologies for the technology. So, the SBOM: the CVE bin tool doesn't generate SBOMs, it consumes SBOMs. I share the concern of whoever raised the question; generating SBOMs consistently is a challenge at the moment. Maybe a talk later this afternoon may help us, but the tool currently doesn't generate SBOMs. It would be a nice extension, I'm sure, because obviously it has a lot of the information internally, but it doesn't currently do that. There was another question about comparing the tool against a tool called FACT and the open source Binwalk tool. I'm not familiar with either of those tools. I had a quick look at FACT, which is aimed at firmware. The CVE bin tool is aimed at all applications — it's not specifically focused on applications or operating systems or firmware; it should be able to cater for any software, whatever "any software" means. Increasingly, it's trying to look at package managers as well, because clearly there's an increasing amount of software deployed through a package manager. The third one, I think Marta asked: does it support stripped binaries? It takes a binary; if the binary is stripped, then there's less information in it, so it's going to find less information that's going to be useful. It will do its best, but, as I said in the talk, if we don't have a checker for a library, it won't find that library.
If people strip binaries — for good reasons, to optimize for space — then maybe that's not the time to do your scanning. Maybe scan before you do the stripping, or maintain both copies. I think those are the three questions I remember; I don't know whether there are any more that I have forgotten. I presume the question regarding stripped binaries is more relevant to people who receive binaries, who receive packages in a stripped format. Yes — for example, a device driver; there they cannot do anything, they only have a stripped binary to deal with. Julian, you are online — I see you joined the room. Are you able to join us on the live stream, the live broadcast? Maybe not yet. So, returning back to you, Marta: are there particular issues you see in the IoT space that prompted you to use this particular approach? What can others learn from what you saw in Oniro? We can learn a lot of things. Maybe the big takeaway I have for the audience is that it's important to find the CVEs, to know what issues you have. But CVEs are just one part of the story: you may have issues that do not have CVEs, and there are quite many examples. I am currently dealing with a series of security patches: there are more than 100 patches, and there are just eight CVEs. But when I look at all those fixes, they are basically security fixes, so I could have applied for CVEs for every single one of them. So what I would also recommend, in addition to just scanning for CVEs, is to check whether you are running the latest version of a specific library or tool. Update all the libraries in your system to the latest version — if they have a long-term support version, that is best for you, because then they will be releasing fixes. If they are not releasing fixes, we in Oniro are backporting to the older versions during the life cycle of the distribution, even if the fix is only in the main branch of the project, and there are other distributions doing exactly the same thing. There is a big effort to actually have all the fixes inside the libraries and executables; it is a very important thing. And how about making it easier for IoT devices to actually be updated in the field? Yeah, that's also a big topic, because it's one of the main issues: IoT devices are quite often not updated. At Oniro we are going two ways. First, we are working on a generic update system, so you can just grab it into your product — if you use Oniro, grab it — and you can update different parts of the system; it doesn't matter which operating system you're using. And then we are including fixes for everything, so you can do a regular update of the source, and then you just prepare the updates and send them to the device. This is to limit the effort on your side, on the product development side, so that it's easier for you to do, and we expect that it's going to help push the vendors to actually roll out their updates. So when you say "you", you refer to the vendor, right? Yeah, we provide an OS distribution for the vendors to take and to add the applications and customizations they care about. But the whole base of the system will have all the security updates during the whole lifecycle, so they do not have to worry about finding updates of OpenSSL or other things. We'll return to you after I have a question for Anthony regarding what can be done about the consumers who use the IoT devices, and whether some of the things you're doing could apply there.
Anthony, we had a question from Jean-Baptiste regarding how you perform on the Linux kernel, because, first of all, a lot of the code is not compiled there. Plus, there is often poor documentation, and a small change is difficult to detect in a binary: it might change a couple of instructions, maybe even just a jump. What's your approach there, or how relevant is that? I don't know what would happen if we scanned the Linux kernel. There are an awful lot of vulnerabilities, a lot of CVEs, reported against the Linux kernel, and it is quite difficult, I think, to actually localize whether a vulnerability applies to you, because the database — the NVD database, which was discussed in an earlier session — maybe doesn't have fine enough granularity to determine whether your kernel actually has that component; it's just recorded against the version of the kernel, not the component within that kernel. Now, that begs the question about the level of an SBOM: do SBOMs go down to source level? Maybe they could go down to source level; I don't think we're mature enough yet as an industry to actually have that level of detail. Maybe if you've got a source distribution, like Marta's talking about, that's achievable, but in most situations that's not the case. So yes, it's a challenge. If you do a scan — just search for the Linux kernel in the NVD database — you get four or five thousand CVEs, and it's an expensive process to go and work out whether you're affected. And another question: if you have a given binary, would there be a way to understand what your checker coverage is for this binary? You mentioned in your presentation that if you don't have a checker, you cannot find anything related to that part. It would be interesting to know whether our binaries are covered, say, 90% by checkers or only 10%, and therefore how relevant the output of the tool is to what we have. It's not something that's being considered at the moment. And, as I think I said, just because you get a vulnerability report, it doesn't mean it's exploitable. So no, it's not trying to do any execution analysis of the binary against the vulnerability. Maybe that's a very interesting topic for somebody else, not for me. In FASTEN we're working to create call graphs of various packages, so we would be able to feed in such results. Yeah. So, Marta, going back to you — and by the way, I would also like our listeners, if they have specific questions they would like to discuss (we see there are a number of people in the panel), to just go ahead and type them. So, Marta, what could be done regarding the end users of those devices? Could the scans that you're doing somehow allow consumers to be informed that they are running something vulnerable, put pressure on manufacturers, or even allow consumers to patch systems that are left orphaned themselves? Yeah, this is an interesting question. That hasn't been a subject I've looked into very carefully, but what I would expect to be possible, in case there is a source code package available for the device you have — and according to the GPL there should be, in many cases — is that you will be able to run a scan, even at the source level. You will be able yourself to run a scan and find out what is really in the binaries, in the versions that are on your device, and then, if you want to support a device that is officially end of life, based on that source code you will be able to do the updates. That's what I would recommend. If you do not have source code for the application, for the device... So with source code distributions for each device, you would be able to tell that this and this IoT device are now vulnerable, right? I'm sorry, could you repeat the question, because I...
That's what I would recommend. If you do not have source code for the application, for the device... Source code distributions for each device, you would be able to tell that this and this IoT device are now vulnerable, right? I'm sorry, could you repeat the question because I've... Yes, based on your answer, I understand that if you have the registry saying that this specific gadget that contains this source code, which is public because of GPL, given this registry, you could go through the registry scan the source code and say that these specific IoT devices are now vulnerable, because the source code has not been updated. You know that the source code contains no vulnerabilities. Yes, that's something we could do. I would just note that it means that if we find the vulnerabilities in the NVT database, it means it's not vulnerable to those vulnerabilities. It doesn't mean it doesn't have others that do not have CVs right now. Yeah, of course. Have a comment here. If you find the vulnerabilities in the NVT database, it means it's not vulnerable to those vulnerabilities, it doesn't mean it doesn't have others that do not have CVs right now. Hmm, sorry. I somehow lost you because I saw another confused by tabs. We have your back. I was hearing the echo probably on of the stream in the main room. Yes, I saw a comment on the main room and tried to involve William who is discussing there, but I think it didn't succeed and I got also confused. So, Anthony, what particular advantage do you see in your approach compared to source code? Well, the source code isn't always available. So, I think it's increasingly, increasingly, as you say, device drivers or applications are delivered as binaries and that allows you to look at proprietary code as well, where it's not on the run-open source license, which still exists. So, it covers that. It also means that binary means you would have acknowledged it to the language that code has been written in as well. So, in theory, that means that you don't have to have a scanner for Java, a scanner for Python, a scanner for JavaScript. You can actually look at it, what's executed. Although, I don't know, the tool is now starting to look at package managers because obviously, that is a valuable source of vulnerability management, obviously, support for the ability management. So, I think the advantage of binary is then you don't care about whether you have the source code. It is what's delivered and what's actually executing on your device. Do you think particular binary types are more or less amenable to your method, depending on the language, if it's a JVM, for example, versus direct binaries or particular architectures? Have you seen particular difficult points? The primary binary we look at is the ELF binary, so the Linux binary. So, I think I said in the talk that the tool is primarily aimed at Linux, a Linux environment. It doesn't look at JVM binary. I mean, it looks at, it now starts looking at Java, Java files, and inside the Java files. I wonder why that was because of the log for J. We spent a lot of time over Christmas looking at that, and it doesn't look at DLLs in Windows. That's not the use case that the tool was designed for. I'm sure that's a future enhancement, should that be required? I assume that many of the things that you're doing kind of be transferred to other types of binary that shouldn't be difficult. Correct. I mean, it's quite a simple, quite a simple main algorithm, get file, scan file, check file, and report vulnerabilities. 
So, it's just a very much, very straightforward, repeated simple algorithm. And we've increasingly put a lot of parallels in. That was one of the Google Summer Code activities that was done in 2020 was to parallelize the activity to make it quicker. People get impatient. So, where particular binary is not supported, I mean, a tester can be for any type of a binary and can be added. Isn't that the case? Or are you first, do you have first have a layer that's understanding often then running a test on a particular section? I think it's just where it just looks for certain. It uses the file, the file command to find out what type of file the is it is looking at. And then it's looking for certain things like, well, is it executable or an health binary in the, it's looking for some keywords. So, that code just needs to be extended to look for new types. Part of what you see, we're going in the future. What are the main challenges we should be looking at? I see one big challenge. Is the code that is copied in into different software? I can see that already. It happens quite often someone copied the whole library into another project or they copied just some files of a library. And those files may include vulnerabilities. If we do not have the metadata at the source level, saying that this is a copy of OpenSSL, that OpenSSL is usually easy because they copy the whole directory and you can see this OpenSSL directory and you can figure out what it is. But you still have to recognize it? Yes, you still have to recognize it. It's even worse when people just copied a small library that is one file. They've put it in some sub directory somewhere. The license is all fine. But still there may be vulnerabilities in this library. People forget that they have to update it with the newer version. So we are missing that metadata to be able to figure out what has been included. And in this case, even for a source-based distribution, we could use binary scanning, in fact, to confirm if there are not some fragments coming from things that we do not have listed in the metadata. So that's a mix of technologies. I think that's something that we'll need in the future. And I would ask both of you, but as a follow-up question, whether there is room in licensing to avoid this issue, for example, for preventing people who are packages exist as a whole to include them as copies, because it also reflects badly on the original developer of the package. If somebody just copy-pays and then they say that there is a vulnerability of this particular package whereas this has been fixed for the past five years. But let me first ask Anthony, your view on challenges and where we should be going in the future. So I think it was very interesting in the first session, first group, about things like package names and versioning. I found that, yes, very interesting that as an industry, we're not particularly consistent. And I think if we could have a consistent names and consistent use of versions, it will make things far easier to be to home in on where the vulnerabilities are. Certainly, I've spent some time trying to put some code in to try and be more robust to try and say to work out, given a package, is that the real-life package? Is it the right version? Have we honed in on the right version? Because you tend to find people are adding extra things to the version numbers because it's specific to the architecture or specific to the distribution and it doesn't conform to the standard that's in the NVD database. 
So I think we just need to try and be a bit stronger in the metadata that supports vulnerability scanning going forward. And I think that goes for distributions as well, making sure that when we put in the S-Bomb, then actually it is a uniform format. So there was quite a lot of good chat going on, I could see about the Perl proposal. And that would be that. I think if everyone could adopt that going forward, then that might be about me a great step forward. I know this lot of legacy, we're never going to fix the legacy, but at least if we can say from 2022 everyone starts importing NVD, CVEs using Perl and the version, the first version, that would be a great step forward. Thankfully, the March of Technology serves the legacy code problem for us by making it very quickly obsolete, either batteries, die devices, stop communicating because of newer protocols and so on. I see a question by comment by a Jean-Baptiste here that with OpenSSL, the two issues I see is a version scheme not computable. And going back to what Marta said, the copy of the M-Crypt library. So what would be your view here? Unfortunately, OpenSSL is not the only library with that kind of an issue. And it is true that it's one of the very popular ones, so we find a copy of it quite often, but there are many more. LeapJPEG, ZLEEP, and a few more. I do not remember the names right now, but they will come back. There are quite many of those. And I agree very strongly with Anthony about the versioning number, if we can make the versioning number correct between all the parties upstream, distributions, and downstream. And also the security community. So we know which versions are the same one. And then we can identify easily which version includes an issue or not. That would definitely help in all that work. Is a semantic version enough for that, if you think of distributions that are their own patches or enhancements, and these may or may not add vulnerabilities? Yeah, sure. I have good examples. I have been working for, yes, on the Linux kernel. I have seen what the Linux distributions do while backporting to all the kernels newer functionalities. So for some distribution, you have old kernel that has half the functionality of a newer kernel. And then we are trying to find out which issues are exactly in this corner. It does have some of the issues of the old kernel. It has some of the issues of the old kernel. It's really hard to understand. And people are backporting patch by patch. So just a versioning here won't be enough. We need something even more sophisticated to make it work. I know that there are initiatives about exactly this topic. And I'm looking forward to seeing what we could do to be good to have a solution. Definitely. Anthony, what's your view here? Yeah, I don't know. I don't know how we're going to move forward. I mean, I mean, the open SSL one, the versioning of the open SSL is interesting because it's like 1.0.1 in a letter, which isn't standard. So we've actually had to put something in the code to actually explicitly manipulate it. And you don't really want to actually go through, you want to have a product where you're having to handle it specifically. You want it to be generic. So an open SSL, I think, was one of the initiators of why the tool was created because it was embedded everywhere. And people wanted to find which version you were using. I think what I often find is when I do a scan, you end up finding that you find that you've got multiple versions. 
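To make the naming and versioning point concrete, here is roughly what the package URL (purl) identifiers mentioned above look like; the versions and packages below are made up for illustration and are not tied to any real advisory.

```bash
# Illustrative package URLs (purl); each one pins the ecosystem, the package
# name and the exact version in a single, machine-comparable string:
#   pkg:generic/openssl@1.1.1k          an upstream OpenSSL release
#   pkg:deb/debian/openssl@1.1.1k-1     the same component as packaged by a distribution
#   pkg:npm/lodash@4.17.21              an ecosystem-specific package
# An advisory keyed on a purl is unambiguous about which package and version it
# refers to, unlike a bare "openssl 1.0.1g"-style string with vendor-specific suffixes.
```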
Like an RPM that has executables inside it, you'll find they're built with different versions of say GNMC as well. So it's all very well-saying at the highest level what the license is and what the library is, but actually are all the components all compatible. And I know you've seen issues with licenses where you have multiple licenses, or maybe we have a similar issue with multiple versions of the library being embedded in different parts of an overall product. So it's not consistent. And that I think is true. That's a bit of work I've done with JavaScript. That seems to be a case as well. So it's njava. So it's not a, it seems to be a common problem where people don't upgrade the product all the time because it works. Well, why do I need to change it? And there's obviously a cost associated with every update and we test, etc. Is that effort worth doing or have the product been abandoned maybe as someone's moved on to something else? So it is a big industry challenge. It seems to be a problem that we've been having for decades and reinventing and stumbling again in each new package manager, in each new distribution system. And I don't think we have come up with a good solution perhaps because it's underlying socially in nature that you cannot coordinate parties that are working independently to use one true version of the library. Right. I see a comment by Jean-Baptiste regarding with versioning schemes with NGIX eventually having created a scheme for vulnerability advisories. And that's indeed a problem, especially if we also consider products that are integrating diverse ecosystems from different package managers. And so on which is increasingly required. You cannot stay tuned and say we're doing only C. You may have some bits of a JavaScript and some Java in the backend and some C libraries underneath. And this is even more complex. Marta, what's your view on such setups? Definitely we have many different environments, things downloading in in the build process, downloading some other modules from different places. We do have that. And also an important thing that we do not always realize how much dependencies our our development has. I was looking into a complete build, the finding out all of the packages that we did we grab. And there were certain packages I had about. I and then I started looking what it is, where it comes from, finding the find out the dependencies. And I was finding packages that were basically not updated in the last 10 years. But they're still used by something. And if you get want to get rid of all those dependencies, that probably do not have any maintenance right now. But they are still used. It's really, really complicated because they are buried in all the dependencies of dependencies and of dependencies. It's really, really hard to do. And it's also a big, big issue that adds even to all of the new technologies that are used in parallel. So you get all those new technologies and the new ways of getting software that you still have all those old dependencies that have that are unmaintained for years already. And you and if you want to secure a product, you have to deal with all of them at the same time. Yeah, I was looking at some per modules two days ago. And this was this case, unmaintained for more than a decade. I was going to say that one of the things that I think we need to start looking at is like the velocity of maintenance. So when a vulnerability gets raised, how quickly does that, does the package get updated, if at all? 
And then maybe that's a trigger for distributions to say, okay, well, if it's not been updated in 12 months, two years, then we need to obsolete that package and move and find an alternative. And certainly, you look at the log for J, the log for J version one was abandoned in 2015, yet there were still many packages or components were still using it. And at the last year when they were looking where the vulnerable for version two. So how do we, how do we track that? How do we make that more visible that a component is now abandoned because it's been replaced by a better version or the developer has decided to go and do something else because he's, you know, it is, he's no commitment if it's an open source thing, he might be doing it for, because he loves it rather than for any commercial gain. So he's going to move on and do something more exciting, maybe. Well, a case here for industry is not showing the leadership and initiative and putting money for their mouth is that is required, perhaps. But also if we do that, what you're saying and transitively, obsolete everything that's not maintained for X years, then the utility open source systems will reduce. Yeah, it's not an easy answer. But I think it's a debate that we need to start having, because I think open source maintenance, if I think is something that, because so much software is now dependent on open source software, how do you ensure that that that it remains safe and secure? And I will say safe as well, because when you start thinking about IoT devices, maybe into things like healthcare, how do an automotive, how do we ensure that they are not making it, you know, dangerous for other users? I see that we have about two minutes till we are, our panel ends. So any closing comments? Marta, would you like to give your... Sure. I would say that there are still quite many challenges to be able to report vulnerabilities in a clear way. But I see advancement in the recent years. We have tools. We have a push now to have the S-BOM available for so many products. And that makes me really optimistic that we are going to solve those issues. We have a lot of technical debt in those subjects, but with enough funding and enough push from the formal requirements, from the government bodies, from the standard organization, I think we are going to advance really well and we are really going to improve the situation. Thank you. I will follow on from that. And I think the S-BOM, I think, is crucial to going forward. What I would love to see is in the same way you see open source projects define what the license is, I would also like them to also publish their S-BOM as part of their delivery. Because increasingly, I think, organizations are going to want to ingest S-BOMs into their systems to be there so they can then do analysis as part of their risk, maybe a risk assessment for the business. So I would like to see S-BOMs become a standard thing and actually to promote packages which deliver S-BOMs and over packages that maybe don't do a S-BOMs. Maybe some tools to make this easier, right? Yes. You have seen the things where most packages now have a license, you know, that might not be in case five years ago. So maybe that is, you know, we have seen the industry can move. Right. Thank you very much for joining this panel. It has been a very interesting discussion and I hope to be able to meet at next first in live. Bye-bye.
One important use of dependency information is the identification and discovery of vulnerabilities. The presentations cover diverse and interesting projects and approaches. We want to understand whether the current approaches address the needs of cybersecurity efforts, how they compare to threats beyond dependency vulnerabilities, and how they can be combined, extended, and learn from each other.
10.5446/57038 (DOI)
Good afternoon. My name is Sandro Bonazzola. I'm a manager working for Red Hat, and within the OKD Working Group I'm focusing on the virtualization-related topics. Today, Simone Tiraboschi, Red Hat Principal Software Engineer working on the OpenShift Virtualization product, is here with me to introduce you to OKD Virtualization. Due to time constraints it will be a very quick introduction, so if you find the topic interesting I would like to invite you to join the conversation within the OKD community and learn more about it. OKD Virtualization is the community project bringing traditional virtualization technology into OKD. It's built on top of the KubeVirt project and the Hyperconverged Cluster Operator project. For those who don't know OKD, it's a sibling Kubernetes distribution to Red Hat OpenShift. If you're not familiar with OpenShift, you can think of it as a Kubernetes distribution which extends vanilla Kubernetes with security and other integrated concepts, adding developer- and operations-centric tools. The OKD Working Group board is composed of members from Red Hat and the University of Michigan. OKD is deployable on many different infrastructure platforms, from bare metal to cloud, and it is suitable for a wide array of deployments, from small edge clusters to massive compute workloads. For OKD Virtualization's scope, however, we focus on the bare metal, on-premises deployment. So what is KubeVirt? KubeVirt is a Kubernetes virtualization API and runtime that provides an additional control plane to let the user define and manage virtual machines. What do we mean by virtual machines in Kubernetes? Our virtual machines run in containers and they are based on the standard KVM stack. They are scheduled, deployed and managed by Kubernetes as native objects, and they are integrated with the standard mechanisms Kubernetes uses to provide access to containers and pods. So there is the traditional pod-like software-defined network connectivity for them, and they are backed by the persistent storage paradigm commonly used in Kubernetes: persistent volume claims, persistent volumes and storage classes. Why KubeVirt? The main benefit of KubeVirt is that you can use virtual machines and containers together. KubeVirt follows the standard Kubernetes paradigms: you can use Container Network Interface (CNI) drivers, you can use Container Storage Interface (CSI) drivers, and the virtual machines are defined via a custom resource definition, so you can manage them as custom resources, just like native Kubernetes objects. Just a few words on the KubeVirt architecture; it's pretty complex and deserves a separate discussion. Basically, on each node there is a control layer named virt-handler. Each virtual machine is basically a pod that contains libvirt and QEMU, which run the virtual machine as a process. That pod can run on any configured node alongside other pods containing standard containers. All of this is managed by an additional API server that provides this additional control plane.
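As a rough illustration of how these pieces show up in a running cluster, every VirtualMachineInstance is backed by a virt-launcher pod, and virt-handler runs as a per-node daemon. The commands below are a sketch; the namespace of the KubeVirt components depends on how it was installed (for example kubevirt for a plain install or kubevirt-hyperconverged with HCO).

```bash
# The VirtualMachineInstance custom resources (the running VMs)
kubectl get vmis -A

# Each running VMI is wrapped by a virt-launcher pod containing libvirt + QEMU
kubectl get pods -A -l kubevirt.io=virt-launcher -o wide

# The per-node virt-handler agents
kubectl get pods -A -l kubevirt.io=virt-handler -o wide
```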
So, what is HCO? HCO is the HyperConverged Cluster Operator. The KubeVirt project provides virtualization capabilities on top of Kubernetes, but in order to get a full and complete experience you also need sibling components, like a storage operator to be able to import the disks of virtual machines, and a network-related operator to easily configure and plumb the network connections for your virtual machines. So the HyperConverged Cluster Operator deploys the KubeVirt operator and all of those sibling operators, the virt, storage and network ones, as a single installable unit, and it provides a single upgrade path for all of them, so that they can be upgraded together in a controlled way. HCO also provides a single entry point to let the cluster admin configure the whole virtualization subsystem from a single control point. How can I get it? The KubeVirt HyperConverged Cluster Operator is available out of the box on OperatorHub.io, and it's also available, as we'll see in the next demo, in the OKD community operators catalog. It's almost a single-click install. If you're curious about how it looks when all the pieces are put together, here you can see the installed operators dashboard provided by OKD. Simone is going to give a brief demo of how to get it installed in a minute. On top of this, a user interface embedded within the OKD console is provided for managing the lifecycle of the virtual machines, the star of this presentation. OKD provides a UI built on top of the functionality provided by HCO and its operands; this is just a screenshot of the virtualization UI, and Simone is going to give a short demonstration of how to use it. And if you are already using a virtualization solution, virtual machines from other virtualization systems can be imported into OKD Virtualization leveraging the Konveyor Forklift project. We are not going to show a demo of it due to time constraints, but you can see an end-to-end demonstration on the Konveyor YouTube channel. And now, Simone, it's demo time. OK, welcome to the demo. This is OKD. OKD is already installed on a bare metal cluster; installing OKD is not part of this demo. We can see that we are using OKD 4.9.0, which is the latest version, from the end of December. We installed it on a pretty small cluster composed of three master nodes and three worker nodes. All of them are bare metal and are capable of running virtual machines. We also already configured the persistent storage for this cluster; more specifically, we chose to configure it with Rook to provide Ceph-based persistent storage. You can see that we have a custom resource for Ceph, called CephCluster, and we have an instance of it; in the demo we can check its status. It's managed by Rook Ceph, and you can see that the Ceph cluster is in a healthy condition, it's up and running, and it's able to host the persistent disks for our virtual machines. Now let's go back to the OperatorHub page. In OperatorHub we have multiple catalogs, and we can look for operators by name or by keyword. If we type KubeVirt, we will find the KubeVirt HyperConverged Cluster Operator. You can click on install. The first page lets you tune the installation; we're going to install it from the stable channel with automatic updates. Now we see that the operator is getting installed. We cut the recording of the demo a bit just to speed it up, but the install process is really fast.
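For reference, the console install shown in the demo has a rough declarative equivalent through OLM; the package, catalog and namespace names below are assumptions that vary between OKD versions and catalogs, so treat this as a sketch rather than the exact manifest.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: kubevirt-hyperconverged        # assumed target namespace
spec:
  targetNamespaces:
    - kubevirt-hyperconverged
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: community-kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  channel: stable                           # same channel picked in the demo
  installPlanApproval: Automatic            # automatic updates
  name: community-kubevirt-hyperconverged   # package name as published on OperatorHub
  source: community-operators               # catalog source name differs per cluster
  sourceNamespace: openshift-marketplace
EOF
```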
We are talking about two minutes. Now we see that the operator is ready and we can create its customer source to trigger the operator and configure our virtualization cluster. Here we have a lot of options to tune the virtualization subsystem. For instance, we can enable or disable specific feature gates. We can configure host devices that are going to be exposed to our virtual machines and so on. This is something that is up to the cluster admin. This is the configuration of the virtualization subsystem for the world cluster. The user is supposed to create a single, the cluster admin is supposed to create a single customer source for the world cluster. Now we see that the hyperconverged cluster operator created the customer source to trigger other operators and it's watching their progress. We see that now the cube virtual converged cluster operator is installing other components. Here it depends on the time, but also this phase it's pretty fast and not as in the video, but it's still pretty fast. Now we see that the product is successfully installed and we can see that under the workloads we have a new entry which is the virtualization UI. It wasn't there before. In the virtualization UI we find a tab to manage virtual machines and an additional tab to manage templates. In our case we want to create virtual machines from templates. The templates are already there, but they don't contain the disk with the operative system that you need for your guest virtual machines. But you can easily provide a new URL to have the cluster downloading it from the cloud. Now I'm filling in the details for the CentOS template. I can do the same on the Fedora template. Also in this case I can choose to import the operative system and the OS image available in the cloud. I have only to provide the URL to download the base image. I can eventually tune it choosing to use a particular storage class. In this case we have only one which is the RookSeth one. We see that the cluster is now importing the base images that we want to make available in our cluster. As you can imagine we have a set of predefined templates but you can also define your customer once. Now we see that in the kubivirt OS images namespace our operators created some persistent volumes to store the base images for our Fedora CentOS-based virtual machines. Now the cluster is importing the disks from the cloud. You can see that there is also progress bar to monitor the importing process. We are importing CentOS and Fedora OS disks at the same time. We cut a bit of the demo. Now Fedora completed and CentOS is still in progress. Now that my templates are ready I can choose to create my first virtual machine starting from one of the templates that is ready to be used. Now you see that Fedora template is green. I click on it. I can customize my virtual machine. I can even choose to create on the fly a Kubernetes service to expose my to expose the SSH part of my virtual machine over a Kubernetes service. Because as I said before by default the virtual machines are running on the border network. As you can see I can also add additional network cards on eventually even on a different network kind. As you can imagine I can easily kill my virtual machine and add additional disks. In this case I'm adding 20 gigabytes of disks in addition to the base disk with the interactive system. Also the disk will be baked by myself. Then I can use a crowd unit or a sysplap or whatever I need to get my virtual machines automatically configured. 
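Under the hood, the VM created from the template is just another custom resource. The sketch below shows a comparable, simplified VirtualMachine with a cloud-init volume; it uses a container disk instead of the PVC cloned from the template boot source, and the image, sizes and user data are only illustrative.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-fedora-vm
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # illustrative guest image
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: changeme
              chpasswd: { expire: false }
EOF
```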
Now the virtual machine is ready, and I created it in the default namespace. As you can imagine, virtual machines are namespaced objects, so in my cluster I have more than one namespace and I can have virtual machines created and confined in each namespace. Now the storage subsystem is cloning the disk of my virtual machine starting from the template one. We showed a lot about the UI, but as I said before, a virtual machine is a native Kubernetes object: it's represented by a CR according to a custom resource definition. The CR is an instance that you can also play with; there is a lot of API that you can use to tune the details of your virtual machines. For instance, you can change the MAC address, and you can also check the status of that specific virtual machine instance. Now my virtual machine is finally running. I can go to the console, a VNC console. My virtual machine got configured by cloud-init, so there is a randomly generated password for it, but of course I can also specify a custom one. I'm able to log in via VNC to my virtual machine and play with it. If I started a virtual machine, I probably already know what I'm going to do with it; this is just a simple demo, just playing with my virtual machine. If we go back to the main virtualization tab, we see that the virtual machine is running on the node called bootstrap; that is the name of the node. As you can see, I can perform all the standard actions that you expect to be available for virtual machines. I can also trigger a live migration of my virtual machine, so the virtual machine that was running on the bootstrap node is now going to be live-migrated to a different node in the cluster. As you can see, the virtual machine is now in migrating status. All the disks of my virtual machine are backed by Rook Ceph; the disks are shared, so the virtual machine can be easily live-migrated and the disks are available to all the nodes. According to the capabilities of the storage subsystem, I can eventually create a snapshot; the snapshotting mechanism can be completely handled by the storage subsystem via the CSI driver. We also enhanced OKD by adding a KubeVirt-specific dashboard which contains some graphs based on virtual-machine-specific metrics. We introduced a lot of metrics with relevant data for all of your virtual machines, and then we created this dashboard with the most useful graphs for virtual machines. You can also access each specific metric individually, and you can create alerts based on the values of those metrics. Everything so far has been performed in the administration perspective in OKD, but as a regular developer using my cluster I'm supposed to use the developer perspective, which is this one. Here I can simply switch the perspective to the developer one. I can still see my virtual machine, and it's still running; I can see it as a developer managing my application in my developer perspective. As I said before, the virtual machine is basically a pod; here I can see the logs of that pod. I'm just showing this to show the control that a regular developer has over his virtual machine. Still as a developer, I can choose to create a new virtual machine. In that case, I'm only able to create virtual machines from the templates that the cluster admin decided to make available to me.
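Several of the actions shown in this part of the demo also map onto simple virtctl commands; the VM name, namespace and service details below are just examples.

```bash
# Open the VNC or serial console of the running VM
virtctl vnc my-fedora-vm -n default
virtctl console my-fedora-vm -n default

# Expose the VM's SSH port through a Kubernetes service, as the demo does
virtctl expose vmi my-fedora-vm -n default --name my-fedora-vm-ssh --port 22 --type NodePort

# Trigger a live migration to another node (storage must be shared, e.g. Rook Ceph)
virtctl migrate my-fedora-vm -n default
```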
The centers were still important so just Fedora was available now. Here I can create a second virtual machine still based on Fedora. The two objects are now available in my name space. Just a curiosity, if I check the first virtual machine that is still running, now I see that it's running on a different node, namely the worker. So the live migration has successfully converged. I can see that associated with my virtual machine, I see the Kubernetes service to expose the SSH part of my virtual machine over the unnamed Kubernetes service. A different one for my second virtual machine, of course. Now we see that the virtual machine is starting. We can monitor its status. We see that the cloning process is still pending. Another interesting thing is that we're also providing CLI-based tool to interact with virtual machines. It's named VHCTL. It's installed. The virtual cluster operator makes it integrated into OKD so you can get it somewhere in the UI. It's already available in your cluster. Now we can quickly see how we can integrate with a forklift. Forklift is also available as an operator. You can install forklift operator. Forklift installation is in progress. The operator is ready. You should create a customer source to trigger it. Now we see that under conveyor-forklift namespace forklift operator is starting. All the ports it needs to act to import virtual machines and so on. After a couple of minutes, forklift is ready. If you go back to the main virtualization tab, you see that now there is a new button called the import virtual machine that is opening the forklift UI to let you import virtual machines. We are not going to demo it right now. You can find forklift demo on YouTube. If we go back to the perconverter cluster operator, we see that as all the operators managed by the operator or the cycle manager, we choose to install the thermal specific update channel so that we are going to get automatic updates for that. The operator is managing a lot of ports in the cube virtual hyperconverter namespace. As you can see, the most critical ports, so the API servers and so on are running with multiple instance to provide availability. Thanks, Simone. If you want to learn more about OKD, here you can find references for getting in touch with the workgroup. If you are interested in the virtualization aspects of OKD, here are a few dedicated pointers. Thank you, everyone, for joining. If you have questions, we are here to answer them in a few minutes. Ciao!
Meet the OKD Virtualization community and learn about it! OKD Virtualization is the community project bringing traditional virtualization technology into OKD. It's built on top of the KubeVirt project and the Hyperconverged Cluster Operator project. HCO ensures a single installable unit for KubeVirt and its sibling components, leveraging the operator framework to provide the easiest way of installing and maintaining this additional control plane. This bundle ships additional tools required for traditional virtualization tasks like uploading disk images, host network configuration, local storage provisioning, and other functionality which is not covered by core KubeVirt. On top of this, a user interface embedded within the OKD Console is provided for managing the lifecycle of the Virtual Machines. Virtual Machines from other virtualization systems can be imported into OKD Virtualization leveraging the Konveyor Forklift project. After this session you will know what KubeVirt is, know what HCO (Hyperconverged Cluster Operator) is and how it relates to KubeVirt, know how the OKD UI presents VMs, have a rough idea of the feature set this setup provides, and have seen a demo and know how to set this up yourself.
10.5446/57040 (DOI)
Hi everyone, welcome to my presentation. I'm Marcelo from IBM Tokyo, and today I'm going to present a scalability test of KubeVirt to create 500 VMIs on a single node. I will make a short introduction, describe the goals, the background and the experiments, and then conclude with some final remarks. So, what's the motivation for this work? There is a trend of using more powerful nodes for Kubernetes, meaning nodes that have more CPU and memory, and in addition there is the advent of composable systems that have a lot of compute power and storage that can be disaggregated and flexibly allocated to workloads. Because of that, there is a requirement to increase the number of pods, VMIs and VMs running on those nodes in order to fully utilize the amount of resources available on the node. However, creating a large number of pods per node imposes performance challenges on the control plane, both for the Kubernetes control plane and for KubeVirt's control plane. Previous work has shown that by properly configuring the Kubernetes control plane it is possible to run 500 pods per node in an efficient way. So the question here is: if it's possible to create 500 pods per node, is it also possible to create 500 VMs per node? What's the impact on the VM creation latency, what's the impact on the Kubernetes control plane when creating 500 VMs on only one node, what's the impact on the KubeVirt control plane, and what needs to be configured to be able to create 500 VMIs? All of these questions I'm going to go through in this presentation. The goal of this presentation is to measure the KubeVirt control plane performance and also to show how KubeVirt can handle a large number of requests, which means creating 500 VMs per node. In order to do that, I defined a burst test that creates a batch of VMs on a specific node. The focus here will be the performance of the control plane, so the data plane analysis, that is the performance of the VM itself, will not be part of this analysis. Just a glimpse of the KubeVirt control plane background: KubeVirt is an add-on for Kubernetes that allows the user to run VMs alongside containers. It is composed basically of the virt-api component, which exposes the REST endpoints used to interact with the CRDs created for KubeVirt, for example VMIs and VMs. It is also composed of a controller, virt-controller, which is the core logic that makes all the components work. The controller is responsible for watching the CRDs, VMs and VMIs for example, creating the pod, and then making the VM run in the cluster. The virt-controller is also responsible for creating the virt-launcher, which is actually where the VM runs. The virt-launcher is where the libvirt daemon runs; it creates the libvirt domain and starts the VM. And virt-handler is a daemon that runs on the worker nodes and basically manages VMIs the way the kubelet manages pods; it manages the lifecycle of a VMI. OK, so what are the experiments and how are they configured? In order to create a large number of VMs, we need to use a small operating system image that allocates as few resources as possible, so that we can pack a lot of VMs onto one node.
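To give a sense of what such a minimal VM looks like, here is a sketch of the kind of tiny VirtualMachineInstance one might use for a pure control-plane density test; the image and resource numbers are illustrative, not the exact manifest used in the talk.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: density-vmi-001
spec:
  domain:
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
    resources:
      requests:        # keep requests/limits as small as possible so that
        cpu: 10m       # hundreds of VMIs fit on a single node; the guest is
        memory: 90Mi   # never expected to boot a full OS for this benchmark
      limits:
        cpu: 100m
        memory: 128Mi
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/kubevirt/cirros-container-disk-demo   # tiny demo image
EOF
```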
To further minimize the resource usage, I try to allocate, via the resource requests and limits, as few resources as possible, just enough to create the libvirt domain and start it. Booting an operating system is not necessary, especially because it would not introduce any additional load on the control plane. OK, so what's the test that will run? I run a burst test that creates a batch of VMIs and waits for them to be created, and I vary that for 50, 100, 200, 300 and 400 VMIs. I also tested 500, but it was not possible to create them since it introduced too much load in the system and it was not working properly. Between experiments I also have a cool-down interval of 30 minutes to allow the garbage collector to work; I will talk more about that later. I create the VMIs at a rate of 20 requests per second. I use the kube-burner tool, which I extended to create KubeVirt objects and to collect some detailed latency information that I will describe later. Regarding the configuration of the system, KubeVirt is configured to be able to create more than 400 VMIs per node. The virt-handler maximum devices setting is configured; before this experiment this configuration was actually hard-coded to 110, which is the default maximum number of pods per node, but it is configurable now and by default it is 1000. Also, when we were creating many VMIs on only one node there was some slowdown, and by default the QEMU timeout, the timeout for the virt-launcher to interact with QEMU, was 240 seconds; I increased it to 900 seconds to be able to create 500 VMIs in the system. I also increased the virt-controller queries-per-second and burst configuration for the clients that make requests to the Kubernetes API: by default it is only 5 and 10, and I increased it to 200 and 400. I will show the comparison between the default and the custom rate-limiter configuration here. For the Kubernetes configuration, I also needed to increase the kubelet max pods, which I increased to 1000, and I configured the kube-API queries per second and kube-API burst, increasing them to 1500, which especially impacts the kubelet here. Regarding the cluster configuration, the experiments were running on IBM Cloud bare metal nodes; those nodes are now used in the KubeVirt CI/CD system to run performance tests, and they have 48 CPUs, so a large node onto which we can pack more VMIs. So, as I mentioned, I'm running two different scenarios: one that has the default rate limiter, which has only 5 queries per second, and one where I increased that with a custom rate limiter to 200 queries per second. In all of these scenarios I vary the number of VMIs from 50, 100, 200, 300 to 400, with an interval between each run.
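The knobs described above roughly translate into configuration like the following. The KubeletConfiguration fields are standard; the KubeVirt settings are only a sketch, since at the time of the talk some of them were hard-coded values or command-line flags, and the exact field names in the KubeVirt custom resource depend on the version.

```bash
# Kubelet side: allow more pods per node and raise the kubelet's API client limits
cat <<'EOF' > kubelet-config-fragment.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 1000
kubeAPIQPS: 1500
kubeAPIBurst: 1500
EOF

# KubeVirt side (illustrative field names): raise the per-node VMI limit and the
# virt-controller client rate limiter, similar in spirit to the talk's changes
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    virtualMachineInstancesPerNode: 1000
    controllerConfiguration:
      restClient:
        rateLimiter:
          tokenBucket:
            qps: 200
            burst: 400
EOF
```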
Okay so regarding the results the most important metric here to understand the performance it's the VMI latency how long it takes to create the VMI so from create the VMI object up to the VMI is ready it's the finance ready which means the libvirt creates the domain and starts the domain and we can see here also the the latency breakdown which means when it's creating a VMI it has many you know many phase and conditions for example when the VMI is created the vert controller also is you know scheduled the request the creation of a pod and then the pod is scheduled and then the VMI will be scheduled as well and then they have also some synchronization between this phase of the pod to the VMI and how affects the latency and we can we can see here you know that the the rate limiter impacts differently when we are using the default and the custom one is mostly especially for the scenario with 400 but we can see the difference the performance difference in all the scenarios so considering here the p99 VM creation latency breakdown and we can see when it was creating 50 VMs 50 VMI's it's by increasing the rate limiter it's improved you know 32 percent and then it was varying around 20 improvement and only in the scenario with 300 that it got worse performance but I would discuss that later what happened here and with the scenario with 400 VMs it's actually got 50 improvement in the performance here in the VMI latency okay so and additionally we can see in the break in the breakdown here the latency breakdown where the latency is so we can we can see that the most you know important points here where the latency is related is when the pod it's initialized so the pod is created and initialized so it's the time that creating the VM domain it's waiting for that for the virtual handler sender also they start requests so it's also all just processed and after the VMI domain is created in the libver it's also need to you know start and be recognized by the controllers that it it's running now so and then we can see that the VMI ready latency is also high here it's those latency here that are the most important latency that we can see in the VMI creation okay so regarding the the what's happened when we increase the rate limiter the cars per second in the virt controller especially what it's impacting so it's increasing the the throughput so it has a higher throughput in the request the rest request so we can see that I'm a higher rest rest request rate here when we increase the the the cars per second also it's because it's you know it's being able now to do more requests the kube api is also speeding up you know the processing time in the work queue so it's able to process more faster and more you know queues in the work queue and improving the overall performance of of the kube virtual components okay so now why what's happened to the scenario with 300 vmi's with the custom rate limiter so it shows some slowdown processing some events we can see here in the vert handler it has like a big spike here in the work queue also there are a lot of you know uh retries especially in the vert controller vmi and the in the vert handler also there are a lot of retries here so it's something it's happening and not properly working as compared to the other scenarios and to further you know try to understand that we can also see some big spikes in the vert api regarding the cpu usage and that's happening to the scenario that wasn't happening to other scenarios and also in the work queue the kubernetes work queue is 
more at rate regarding the admission quota controller so it's doing more requests in both the kube virt and kubernetes api here okay so and what's the the main reason about that it's uh impact from previous execution so we can see that the scenario that was running to uh creating 200 vmi's um it's got a lot of you know storage operation errors and it's it's related those errors are related to amounts to delete the vms in fact i i have the problems to many vms uh vmi's were uh objects were stuck in the system and i need to force delete them and i also we can also see that this um you know forced delete because so the crg finalizers were not being removed from the object and we can see that kubernetes uh crg finalizers had some slowdown here especially in the work queue we can analyze that it was um you know to taking the 20 seconds to process crg finalizer and also the the longest you know uh thread that it's running that also taking more than two minutes for example here which actually it was very slow this process um and was slowed down the deletion of the crg object and because of that it was also impacting the the next experiments okay finally i want to conclude here uh saying that it's regarding the the overall resource usage of the cluster it's being uh you know properly you know using all the resource it's as expected and except from the the scenario with 500 vmi's where we can see some spike in the cpu usage and which is actually expected also because we are overloading the node okay so find the final considerations in this work we demonstrate how to configure kubernetes and kubernetes to be able to create more than 110 vmi's most specifically to create 500 vmi's per node and we also show that the resource node is not the only limitations but the counter plane you know performance can also be a limitation when we want to create more vmi's per node because they can be heavily overloaded and um we also demonstrate that increasing the vert controller curse per second and burst improves the performance we also describe it that there is like a high latency between the pod is ready and the vmi become ready which basically related to the performance of creating the vmi domain in the libvert and how the controllers reconcile and make the vm state ready and we also show that previous experiments impact the performance of the subsequent executions questions you okay can you hear me okay yes i'm not the main uh let me check this oh okay it should be fine now okay i'm seeing here one question it's the three minutes uh cool interval between the experiments so this is basically uh you know when we delete everything uh all the vmi's or the kubernetes object it takes sometimes for the garbage collector to remove these objects from the tcd for example and and also uh to uh you know decrease the the the vm's uh sorry to decrease the the heap uh you know memory usage for the garbage collection works so it depends on manufacturers when the garbage collector is actually triggered and because of that just for a safe march to avoid as much as possible some experiments to interfere the performance of the other i easily 30 minutes but as you saw it's actually happened that one of the scenario um i experiment uh you know i previous execution uh had some fail failures and actually impacted in the performance of the subsequent show experiment so that's was the the main reason for that okay um i'm seeing here another question about how much overhead is running on libvirtdemon per vm okay so it's well this is 
interesting it's um it's basically the overhead that's more important here i would say that it's for the memory so in my case each vmi so i think i have it in these slides uh my case each vmi i think it's using 200 uh something megabytes of memory additional to the via the the memory utilization of the vm just for uh locate for the overhead so libvirt and the other pods that are running alongside the vm so and for the for the cpu itself um it's uh i don't remember now that the the the amount of cpu overhead but it's short because the libvirt is only uh using more cpu when it's actually creating the domain and starting the domain and uh after that the all the the cpu is just more related to the to the vm itself but the the memory overhead it's actually it's something that it must be taking in consideration any other question okay so regarding the vmi creation latency that someone is asking it depends so depends how many vms are you creating at the same time so for example in my experiments i have this uh i would say high high load of 20 vms creation per second so it's a lot of vms being created per second so it's when i create uh only 50 vms it's around uh one minute to create each vm in the worst case and however it's uh i think it's lost however when we create more vms it takes more time to be created it's not like pods that it's um it's constant the vm creation time the vmi actually in linear increase with the number of vms to be created the time to create the
Kubevirt's performance and scalability are determined by several factors. As the number of VMs per node gets larger, using more powerful nodes (i.e. with more CPUs and RAM), the scalability of Kubevirt's control plane becomes a bottleneck, slowing down the VMI creation process. This talk will cover the motivations and concepts around general benchmarking of the KubeVirt control plane, as well as explaining the journey to running a density test with hundreds of VMs per node. In addition, I'll provide some performance metrics comparing VM build time in various scenarios. Participants will have a high-level knowledge of the on-going KubeVirt sig-scale community performance assessment and the single-node scalability characteristics of KubeVirt.
10.5446/57041 (DOI)
Good afternoon, my name is Miguel Duarte. I'm a software developer working for Red Hat on the OpenShift Virtualization Networking team. OpenShift Virtualization is the downstream distribution of the Qvert project, essentially a virtualization plugin for Kubernetes, allowing users to interconnect both pods and virtual machines in the same orchestration engine. Here today to present a talk about network interface hotplug for Kubernetes and for Qvert in Fosdems virtualization and infrastructure as a service developers room. Let's start with the agenda for this presentation. To prepare the audience for this talk, the introduction section must feature brief explanations of the CNI, MULTUS and Qvert projects. Once these concepts are clear, we can then explain what our motivation is for hotplugging network interfaces into Kubernetes pods and Qvert VMs and from there be able to specify the problem and set clear goals for the implementation section. Afterwards, we will briefly describe how this proof of concept was developed, explain the changes required in MULTUS and in Qvert. We will then demo the feature and finalize with the conclusions and the next steps for this work. To provide some context to the audience, we first need to address the Kubernetes Networking model. The Kubernetes Networking model is quite simple and according to it, all pods can communicate with all other pods across different nodes, even when directly connected to the host network. Furthermore, the Kubernetes agents can communicate with any pod on the node where they are located. In order to implement the networking model, Kubernetes relies on CNI, which stands for Container Network Interface and is a cloud native computing foundation project. CNI is a plugin-based networking solution for containers and it is also orchestration engine agnostic. This means that Kubernetes is in fact just not a runtime for CNI. CNI will implement Kubernetes networking model by reacting to the following events. So whenever a pod is added, it will create and configure a networking interface in the pod and connect that to the cluster-wide network. On the other hand, whenever a pod is deleted, it will perform cleanup of the allocated network resources. It is also interesting to say that Kubernetes chose to use CNI in a very minimalistic way to implement their network model. They configure a single interface on the pod, which essentially means there is a single cluster-wide network connecting all the pods across the cluster. Regarding how it works, the CNI plugins are simply binary executables hosted on the host file system. They are spawned by the runtimes, in this case Kubernetes, upon certain events whenever a pod gets added or removed as we discussed previously. The input configuration is passed via standard in and is basically a JSON-encoded string. And the structured results is reported via standard out and cached on disk and is also a JSON-encoded string. In this slide, we can see a very simple example of a CNI configuration for a known plugin type, which I will use to explain how the runtime knows which CNI plugin to invoke. This type attribute in the CNI configuration must match the name of binary executable located on a well-known directory on the host file system. Its default location is slashopt slash cni slash bin. It is also interesting to say there are standard keys in the configuration, in this case, for instance you have name, CNI version, type and IPAM, but there are also plugin-specific keys. 
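The slide itself is not reproduced here, but a CNI configuration of that shape looks roughly like the following sketch, which uses the standard bridge plugin; the configuration shown on the slide additionally carries a plugin-specific kubeconfig key, which is discussed next.

```bash
# Illustrative CNI configuration. cniVersion, name and type are standard keys;
# "bridge" and "ipam" are plugin-specific. "type" must match the name of a
# binary in the CNI plugin directory (/opt/cni/bin by default).
cat <<'EOF' > /etc/cni/net.d/10-example.conf
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "br-example",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/24"
  }
}
EOF
```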
In this example, the Kubernetes key, which is used to indicate the path to the queue config, is one example. As indicated before, Kubernetes chose to only provide a single network interface per path to interconnect the entire cluster. If for whatever reason you require more than one, you need to search for answers outside the realm of Kubernetes. This brings us to Maltes. Maltes is a meta-CNI plugin, meta in the sense that it will in turn invoke other CNI plugins named delegates. It enables a pod to have more than one interface. It even allows for an end-to-end interface to network association, meaning you can have multiple connections to the same network or connect many different networks, each implemented by a different CNI plugin. After having Maltes deployed in your system, requesting additional network interfaces from it is quite simple. You just have to specify a list of attachments using a special annotation on the pod. Its key is kubernetes.v1.cni.cncf.io. Its value is a JSON-encoded string, featuring the list of network selection elements. The featured example is quite simple. The attachments just state their name. You can use this to specify more complex scenarios, like requesting a specific IP or MAC address for the particular attachment. The JSON-encoded string, featuring the CNI configuration, is found within the Kubernetes data store in an object-type network attachment definition. Maltes will query the API server for the attachment whose name is indicated in the pod's annotations, and then will use the CNI API to invoke the correct binary with the CNI configuration passed via standard in. This object type, network attachment definition, is a Kubernetes API extension, which is provided for and installed by Maltes upon its deployment. In these diagrams, we have two scenarios. The left diagram represents a typical vanilla Kubernetes deployment. Kubernetes will invoke the default cluster network CNI binary, which will create and configure a pod interface, interconnecting it to the cluster-wide network. The right diagram depicts a deployment with Maltes. Maltes is deployed as a default cluster network CNI binary and will, in turn, always invoke a common cluster-wide CNI plugin responsible for creating the pod's primary network. If no other networks are specified in the pod network's annotation, Maltes is just in proxy between the original cluster network CNI and Kubernetes. When additional networks are requested via the pod's network annotations, Maltes will query the Kubernetes API for the attachment information and then proceed to invoke the correct CNI, passing the aforementioned configuration via standard in. The delegate plugin will then create and configure an additional network interface on the pod. Now that we've understood how CNI is used to implement Kubernetes' networking model and how Maltes is used to enable pods to feature multiple network interfaces, it is time to present the KubeVert project. KubeVert is essentially a virtualization plugin for Kubernetes that allows the users to run virtual machines inside Kubernetes pods. It gives users the ability to run, manage and interconnect both virtual machines and containers within the same platform, Kubernetes, following its philosophy and semantics. Good example of this is that the VMs are described using the Kubernetes declarative API. 
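Putting the two pieces together, here is a sketch of a NetworkAttachmentDefinition plus a pod requesting it; the names and the embedded CNI configuration are illustrative, and the annotation key used by Multus is k8s.v1.cni.cncf.io/networks.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-blue
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "tenant-blue",
    "type": "bridge",
    "bridge": "br-blue"
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: tenant-blue   # list of network selection elements
spec:
  containers:
    - name: app
      image: registry.example.com/demo:latest   # illustrative image
EOF
```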
One disadvantage, the most common use case is a migration path from virtualization workloads to a containerized microservice-based solution, where you little by little decompose your existing virtual machines to a microservices-based architecture by splitting the virtualized workloads into tinier pieces that fit containers. Disadvantages of this approach are a single common platform for the development and operation teams. I will now use this architecture slide to reference and explain the most relevant actors of the KubeVert architecture. On the right side of the slide, we have N Launcher pods, each encapsulating the libVert plus QM processes for every provision virtual machine. There is a dedicated pod per node in the middle, running the QVert agent. It will ensure the virtual machine's declarative API is enforced by making the declared state converge into the VM's observed state. Finally, to the left side, we have a cluster-wide virtualization controller pod that monitors all things related to virtualization. This component is also responsible for owning, specifying and managing the pods where the virtual machine is run. Now that the audience understands what CNI and MOLTAs are and also has a basic understanding of KubeVert's architecture, we can indicate the motivation for this feature, specify the problem we're trying to solve, and list the goals for the implementation. The motivation for attaching new interfaces to running virtual machines without requiring a restart stems from the fact that some VMs run critical workloads which cannot tolerate a restart without impact on service. A common scenario is when such a VM is created prior to the network. Imagine for some reason an organization's network topology is updated and the VM running the critical workload must connect to a newly created network. Furthermore, adding or removing network interfaces to running VMs is an industry standard available in multiple virtualization platforms with which KubeVert wants to have feature parity. Given this, we can now define the problem as providing the dynamic attachment of L2 networks without requiring the restart of the workload, whether it is a pod or a virtual machine. The goals for the implementation stage are then to add network interfaces to running virtual machines, remove networking interfaces from running virtual machines, finally a virtual machine can have multiple interfaces connected to the same secondary networks. It is very important to highlight here that plugging an interface into a VM requires an interface to be first plugged into the pod where the VM is running. Now that we've explained our motivation, defined the problem and set clear goals, we can move into the implementation section starting with the changes required on Maltes. Remember that Maltes is a CNI plugin and as a result it is a simple binary executable on the Kubernetes NOS file system, invoked by the runtime upon a certain set of events, adding or deleting a pod for instance. Changes is now required to watch for the pod networks annotations update and then trigger the correct delegate CNI whenever the annotation changes. For instance, when a request for a new attachment is added to the pod's network annotation list, this control loop should reconcile the pod by invoking the delegate CNI with the add command. On the other hand, whenever an entry is removed from the pod's network annotation list, the delegate CNI should be invoked, this time around with the delete command. 
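For context on what interface hotplug later has to modify, this is roughly how secondary networks appear in a KubeVirt VM spec today: the interfaces list describes the guest-side NICs, and the networks list binds each one either to the pod network or to a Multus NetworkAttachmentDefinition. Disks and volumes are omitted for brevity, and the names are illustrative.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-with-secondary-net
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}          # primary interface on the pod network
            - name: blue
              bridge: {}              # secondary interface
        resources:
          requests:
            memory: 1Gi
      networks:
        - name: default
          pod: {}
        - name: blue
          multus:
            networkName: tenant-blue  # the NetworkAttachmentDefinition from above
EOF
```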
The big question here is where we should put this controller code. In order to host this control loop code that reconciles the workload pods, we first have to re-architect Multus as a thick CNI plugin. A thick plugin is characterized by a client-server architecture where the client, the Multus shim in the picture, is just a binary executable on the host file system. It still implements the CNI API that we've shown previously, but all the heavy lifting is executed by the Multus controller, also shown in the picture. The Multus server side will expose a RESTful API on top of a Unix domain socket that is bind-mounted into the host, thus enabling the client to contact the server. The pod reconciliation loop described previously will be implemented in the Multus controller, thus allowing Multus CNI to react to custom events, in this case updates to the pods' network annotations. Now that we've understood the changes required in Multus, which add or remove interfaces to or from the pod, we can proceed with the changes required in KubeVirt to extend this connectivity from the pod into the running VM. To do so, I'll start by showing a network diagram of a pod running a virtual machine. As you can see, there is a pod interface created by CNI connected to an in-pod bridge, having another connection to a tap device that QEMU uses to create an emulated network device for the VM. A good API for interface hotplug for KubeVirt VMs would follow the same approach we described for pods, where updates to the VM spec, whether you add or remove interfaces, would trigger the interface hotplug or unplug. Unfortunately for us, that is not possible, since updates to the VMI spec are only allowed to the KubeVirt control plane entities. As such, we have to update the VMI spec via a newly added sub-resource, which is triggered by the KubeVirt CLI. When a KubeVirt user triggers the add interface or remove interface command, it will send a REST PUT request to the add interface sub-resource of the VMI, whose handler will in turn patch the VM's interfaces and networks lists in its spec. The cluster-wide virtualization controller is continuously monitoring the VMs. Whenever it sees a difference between the interface list in the VM's observed interface status and its interface spec, it will recompute the pod's network annotations and update the pod's spec with this data. Once the Multus controller sees the update of the pod's network annotations, it will mutate the pod's networking infrastructure by adding another pod interface via a CNI delegate, whose networking must yet be extended into the running virtual machine. Once the cluster-wide VM reconcile loop notices there are interfaces listed in the pod status that are not reflected in the VM's interface status, the virtualization control loop will mutate the VM's interface status with these new interfaces, indicating their pod counterparts are already available. The control loop of KubeVirt's agent, which only focuses on the VMs running on the node it manages, will then see this update and act accordingly. Acting accordingly in this context means doing two different things. The first of which is to create all sorts of auxiliary networking infrastructure to extend network connectivity from the pod interface into the virtual machine. This step will create another in-pod bridge that interconnects both the pod's interface, which was previously connected by the delegate CNI plugin, and a newly created tap device. Finally, KubeVirt's agent will converge the VM specification with its observed status.
It will invoke attach interface for new networks listed in the spec and call detach interface for interfaces listed in the status but not present in the spec. Once libvirt processes the dynamic attachment operation, the newly created emulated network device will be available inside the running VM. The last thing I want to address in the implementation section is related to QEMU's machine type. This attribute can be seen as a virtual chipset that provides certain default devices for the VM: graphics card, Ethernet controller, etc. QEMU supports two main variants of machine type for x86 hosts: a legacy chipset, PC, and Q35. The most modern machine type available, Q35, has limitations in this regard. By default, it supports a single hotplug operation. When users require more than one, they must prepare in advance by requesting an appropriate number of PCI Express root port controllers. Our solution for this was to mimic OpenStack Nova's implementation and expose a knob where the users can specify the number of root port controllers they want available on the VM. The first demo we'll see is of a hotplug operation against a Q35 machine type VM. The first thing we'll do is start our scenario. We will need to update KubeVirt's feature gates, indicating this non-generally-available feature as available to the users. Secondly, we provision a network attachment definition, holding the specification of the network which will be hotplugged later on. Lastly, we provision a VM having a single network interface in it, connecting the VM to the cluster's network. The top right corner of the shell will be used to monitor the VM's interface status, while the bottom right corner will list the associated pod network annotations. We will need the pod name for that. As expected, the VM's observed status features a single interface and the pod's network annotation features an empty list, meaning there aren't any secondary networks available to the pod. I will now request KubeVirt to hotplug an interface into our running VM. The add interface command will be issued, requesting a new interface connected to the dedicated network to be made available on the VMI. As we can see, a new secondary network was listed in the bottom right corner, which triggered the Multus thick plugin to create a new network interface in the launcher pod. After a while, KubeVirt proceeded to extend networking from the pod's interface into the VM, and this new interface is now listed as available within the virtual machine. If we try again to use the hotplug feature for this virtual machine, we'll see the plug operation fails in libvirt, since there aren't any available PCI slots. In this second demo, we will present the exact same scenario, but this time making sure to request more PCI Express root port controllers. It once again starts by provisioning the scenario. As before, the VM's interface status will be monitored in the right shell. This time around, there is no need to monitor the pod network annotations. As you can see, the only difference between the current and previous scenarios, other than the name of the VM, is a newly exposed attribute indicating the number of PCI root port controllers. As can be seen on the left side of the shell, there is a single interface available within the VM. We will now request a new interface for it via the KubeVirt CLI. Again, as you see on the right side of the shell, we can see the newly added interface after just a few seconds.
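The exact user-facing command of this proof of concept is not shown here, so the following invocation is purely hypothetical, just to give a feel for the shape of the request; the command and flag names are assumptions:

virtctl addinterface vm-demo --network-attachment-definition-name dedicated-net --name hotplug-net

Whatever the final syntax, the effect is the PUT request to the VMI sub-resource described earlier, which patches the interfaces and networks lists and lets the controllers reconcile the pod and the domain.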
Let's now try to hotplug another interface, this time with a different name. As you can see on the right side of the terminal, another interface status is listed, corresponding to the newly attached network interface. We finally log in again over the console to the VM, where we can see the three network interfaces. This concludes our demo. This last demo will feature the reverse flow, unplugging an existing network interface from a virtual machine. As usual, the left side of the terminal will be used to interact with the virtualization workloads, while the right side will be used to monitor the virtual machine interface status. As can be seen, both terminals show three different interfaces in the running virtual machine. Let's now invoke the remove interface CLI command to remove one of those interfaces from the VM. As you can see in the right terminal, the hotplug status type changed to a pending unplug operation, and after a short while the entry disappears altogether. When we check the state over the VM's console, we see the corresponding interface, eth1, was removed from the domain. As for conclusions, the first one should be pretty obvious by now: to plug an interface into the VM, it must first be plugged into the pod. At the pod level, I'd like to highlight that unplugging the default cluster network interface from the pod is not possible, and is entirely out of scope of this feature. Furthermore, plugging and unplugging to and from the pod is implemented by Multus, which is of course a requirement of this feature. Finally, some QEMU machine types require VM spec updates indicating the number of PCI root port controllers; otherwise, the users will get a default, which will leave them able to hotplug only a single network interface to the VM. To conclude this talk, let's quickly enumerate the required next steps for this feature. The software we're running in this presentation is essentially a proof of concept; none of the code was actually merged at the time I recorded the video. As such, we first need to get the Multus code changes merged and afterwards focus on the KubeVirt code and productize it. And this is all, we've reached the end. I thank you for your time, I hope you learned something, and I will leave you with some interesting resources so you can get more information about this subject. Bye! So, I see here one question. I'm going to read it out loud. We have Fruity Welsh asking if it is possible to implement multiple interfaces on the same CNI, so that multiple IPs on the same VLAN or multiple VLANs on a trunked interface. Yes, it is possible. Give me one second, I need to find a way to stop this first, okay. So again, it is possible to have that; one of the goals we had from the beginning was that you're able to have multiple connections to the same network. So yes, this is possible.
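For readers wondering what multiple connections to the same network look like in practice, the pod annotation simply repeats the attachment; the attachment name and interface values below are invented for illustration:

k8s.v1.cni.cncf.io/networks: '[{"name": "vlan-10", "interface": "net1"}, {"name": "vlan-10", "interface": "net2"}]'

Both entries reference the same network attachment definition, so the delegate CNI is invoked twice and the pod, and through hotplug the VM, ends up with two interfaces on the same network.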
Design and implementation of dynamic network attachment for Kubernetes pods and KubeVirt VMs. Immutable infrastructure is the law of the land in the cloud native landscape, promising benefits to software architectures run in Kubernetes. … except sometimes the rules must be broken to achieve certain use cases; take for instance the dynamic attachment of L2 networks to a running VM: to hotplug an interface into the VM running in a pod, you first need to hotplug that interface into the pod. This feature is particularly of interest (required, actually) to enable scenarios where the workload (VM) cannot tolerate a restart, or when the workload is created prior to the network. When thinking about strategies for tackling this problem, we faced a recurring question when trying to come up with a modular design to provide this functionality: "should the changes be located in KubeVirt, and thus solve this issue for Virtual Machines, or should we take the longer path and address this issue also for pods ?" We chose the latter, which unlocks dynamic network attachment for pods, thus also benefiting the Kubernetes community. This talk will provide the audience with a basic understanding of KubeVirt, CNI, and Multus, and then propose a design to add (or remove) network interfaces from running pods (and virtual machines), along with the changes required in Multus and KubeVirt to make it happen. It will also factor in a community perspective, explaining how we pitched and got both the Multus and KubeVirt communities involved in a working arrangement to deliver this functionality.
10.5446/57043 (DOI)
Hello, welcome to this presentation about adding TPM support to oVirt. My name is Milan Zamazal and I work as a software developer at Red Hat. I participated in adding this feature and I'd like to share with you some experiences from its development. Trusted Platform Module, or simply TPM, is a piece of hardware providing some security-related functions. It can typically be used for things like managing secret keys and data and storing them in secure memory. In virtualization it can be an emulated device serving the same purposes. Its presence is required by recent versions of Windows, but it can also be used for managing disk encryption keys or other purposes. oVirt is an open source distributed virtualization solution based on QEMU, which is an emulator of processors and other hardware, and libvirt, which manages virtual machines, or simply VMs, running using QEMU or other hypervisors. There are many components involved, but the two most important ones in oVirt are the oVirt engine, the central virtualization manager, and VDSM, handling the hosts managed by the oVirt engine. Why talk about TPM in oVirt? Because it is a good example of a feature that may look easier from outside than inside. You can look under the hood here and see how the feature is implemented and why some things work the way they do. I'm not going to talk about TPM itself or about how to use TPM in oVirt. The latter has already been presented at the oVirt conference last year and you can use the link here if you are interested in it. Adding TPM to a virtual machine is easy. It can be added to the libvirt XML description of the VM using this snippet. But is it really that easy? Unfortunately no. Nature is complex and we cannot fully understand even the purest science, which is mathematics. At least in my region we talk about three big crises of mathematics. The first one was when Pythagoras and his companions had trouble knowing what the length of the square's diagonal is, because they knew only integers and their fractions. And the contemporary world couldn't exist without calculus, and still calculus was used for about 150 years without good foundations and actually understanding what infinitely small is. And today we know that any theory including natural numbers cannot be both consistent and complete. Either there is a claim that is both true and false, or there is a claim where we cannot decide whether it is true or false. And there may be future crises we don't know about yet. Computer practitioners work with numbers a lot, but they often deal with problems such as off-by-one errors and they don't need advanced mathematics for this. So many problems can be resolved by just giving up on completeness and reducing our requirements. Does it mean we can handle software better than maths? We all know the answer. This is how one of my computers welcomed me this year. It stopped working properly and I found out that all my file systems were read-only, apparently due to some bug in hardlink handling in Btrfs. As in many other cases, reboot helped. Until the error occurred again. But if you can still watch this stream it means that software at least partially works and it's not actually that bad. So will TPM work with just the simple device definition? In some sense yes, unless we want additional functionality such as being sure that the VM actually starts, retaining TPM data across VM restarts, or having a proper user interface. What can prevent the VM from starting?
The first thing is that TPM requires UEFI firmware; it doesn't work with BIOS. It is not a big problem, but it must be addressed in the user interface, including all the dependencies between different options and additional considerations such as different architectures. Even with UEFI firmware, the VM won't start if a software TPM emulator is not present on the host. The emulator is needed only if the VM contains a TPM device. That means QEMU and libvirt don't depend on the corresponding package directly and we must add the given package dependency to oVirt VDSM. And we must guarantee that the given package is present on hosts in the given cluster. For similar purposes we have the concept of cluster compatibility versions in oVirt. TPM can be used only once a new cluster version introducing this feature is available and users have upgraded their hosts to that cluster version. One more thing about the device definition. Here is the device definition we originally used. Can anybody spot what's wrong here? It is based on the implicit assumption that libvirt will select the best model for the given TPM device parameters. But this is not how libvirt works. libvirt is strictly backward compatible, which is good. In this case it selects the original, old model, and we want to use the more modern model, so we must specify it manually. Okay, the VM is running with the right device and the device is listed among other VM devices in the web user interface. The device is different from the other devices there and needs its own icon. What's the problem in adding an icon? It requires communication with the graphic designer, explaining what we need, discussing several proposed icon versions and so on. It's not something especially tricky, but it still requires a significant amount of work. And sometimes little isolated things to do can sit aside for a long time without getting attention. Now to the bigger problems. When a VM stops and is deleted from the host, its TPM data is deleted too. When the VM starts again, we don't have the secrets stored in TPM anymore. So we must find a mechanism for transferring the data between hosts and the central virtualization manager. It's not difficult to come up with many ideas how to do it. We can use shared storage for storing the data, or we can use a central database, or we can even attach the data to the XML definition of the VM. We can use events or we can change API calls to pass the data around. The problem is that each of these ideas has its tradeoffs and it's important to select a good approach from the beginning, because once the selected solution is implemented, it's very difficult to change the overall mechanism to a different one. So this is a complicated decision. We know there are harder problems, like making the code pretty and not ugly. The problem is we can have a piece of code that is both ugly and not ugly, or code where we cannot agree whether it is ugly or not, all that without a proper definition of ugliness. Developers can spend a lot of time discussing these philosophical issues and then be left with no time to implement features such as TPM. Fortunately, this didn't happen to TPM and we hopefully selected the right mechanism. That means the data is stored in the engine database and we pass it around using modified or newly introduced API calls. A temporary problem was that we had to require a new cluster compatibility version again because of the changed APIs. A good time to retrieve the TPM data is when the VM is no longer running but it is still present on the host.
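For illustration, the difference boils down to something like the following libvirt snippets; this is a sketch, and the exact XML oVirt generates may differ. The first definition leaves the model choice to libvirt, which for backward-compatibility reasons picks the older tpm-tis model; the second asks explicitly for the modern tpm-crb model:

<tpm>
  <backend type='emulator' version='2.0'/>
</tpm>

<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>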
At that time, we can be sure that the data is final, it cannot be further modified, and we can safely take it. Then we can call the VM destroy API call that deletes everything about the VM from the host. But what if something bad happens and the data retrieval API calls fail all the time? Then we would never call VM destroy and this could cause a lot of other problems. So we decided on a compromise solution. If the data retrieval calls fail several times, then we call VM destroy anyway. Any pending TPM data modifications are then lost. But this is not very different from real hardware. If hardware dies, we lose all the data stored within it. Proper backups are needed whether we use real or emulated hardware. The VM destroy call stops the VM if it is still running on the host. This is used as an implementation shortcut for the power-off operation in the oVirt engine, without a proper shutdown. In such a case, there is only a single VM destroy API call and we have no space to retrieve the TPM data. Since it would be difficult to change the current powering-off mechanism in the engine, we must retrieve the data before destroy is called. That means the data may still be modified afterwards. But again, it's not very different from real hardware, where hard pressing a power-off button is not a safe operation. Is that all that was needed? No, we mustn't forget about irregular flows. What if the VM crashes or the host disappears? When the VM crashes, it's no longer running on the host, but its data is still there, and this is the situation we already handle. A lost host is a different problem. We may lose all the TPM data modifications performed while the VM was running there. In order to prevent this, at least partially, we periodically retrieve TPM data while the VM is running. In order to avoid unnecessary API calls and data transfers, we attach a TPM data hash to other API calls and the engine retrieves the data only if the hash changes. Was that all that was needed? No. We read the TPM data from the software TPM data directory, and if software TPM writes the data at the same moment we read it, then we can get corrupted data. Looking into the software TPM sources, we could see that the problem is real and thought about ideas how to improve it. We sent the idea to the software TPM developer on a Friday afternoon and I was very pleased when a fix was implemented and merged just a couple of hours later. Well, if you send a nice bug report, you increase your chances of getting a nice response. Was it done? Unfortunately not, because there was still an old software TPM version in the host operating system. So we had to implement a workaround, watching for data changes and reporting the changes only if they are stable. And there is still an additional risk that software TPM may change its data format in future versions. It would be nice if software TPM could still read the old data format in such a case, but there is no guarantee about it and we must check that in the future. Oh, and snapshots. Even the problem with my file system I mentioned before was related to snapshots. Is there anybody here who uses snapshots of any kind and has never had any problem with them? At least deleting a different snapshot than intended by mistake. How is it with snapshots and TPM? There is no problem with offline snapshots; the VM is not running and we can copy its data safely. The only question is where to put it and how. We decided to put it into the snapshot OVF. But how about live snapshots?
When we take the snapshot, the guest operating system is running and can in theory modify the data at any time, and it's difficult to get TPM data exactly corresponding to the snapshot. So we attach the last data retrieved periodically to the snapshot, and users must take this into account. Is it any different for snapshots with memory? There are some subtle but important differences. First, if the state of the TPM emulator doesn't correspond to the state of the VM memory, then the guest OS can get into trouble, which is bad. And second, while the VM memory is being dumped to a snapshot, the guest OS is stopped, which creates a good opportunity to copy the TPM data safely. Unfortunately, libvirt doesn't have support for this, so we had to disable snapshots with memory for VMs with TPM until this is fixed, which is not very good. And here is a fairly complete picture of what all has to be considered regarding storing and transferring TPM data. So we are almost done with TPM data. Except for security: we may not expose the data. And one of the places where it can easily appear is logs. Not that we would like to log something like "TPM data is this and this", but it can appear there as part of other data, for instance API call arguments or responses. And this must be prevented. But not only that; let's not forget about irregular flows again. An unexpected error may occur and it may log a traceback together with data to ease the debugging. And secret data must not appear there. It's advisable to always check the logs. It happened to me once that I implemented a simple patch that worked, but it periodically logged a harmless traceback by mistake. And what do people do when they experience some error? They look into the logs and latch onto the first error they can see there. So I had to respond to several bug reports I had nothing to do with. Lesson learned. The disadvantage of an emulated device is that there is no real secure memory. libvirt provides at least a mechanism to protect the data with a passphrase. But with helpers like this, or two-factor authentication, or other security measures, it's always necessary to consider the context in order to prevent a false sense of increased security. In oVirt, we would generally need the passphrase at the same places where we have the data. So we cannot use it meaningfully and we use technical measures instead of trust in people with elevated system permissions. When we start a VM on a host, we upload its TPM data to the local file system there, and it may be a danger to other data on the host. We must use sanity checks to prevent damage by mistake or by malicious requests. And also check the other components we use. We use the tar format for software TPM data, and it may be tempting to use the shutil Python library in Python code to unpack the data. Unfortunately, the library is insecure and we had to use GNU tar, which can provide the necessary guarantees. The virtualization part is almost done, but how about the guest operating systems? In order to avoid user confusion, we allow enabling TPM only for the guest operating systems we know work well with TPM. Users can enable TPM support for other guest operating systems if they wish. There were many other things that had to be done in order to have full TPM support in oVirt, including TPM support in templates, in the REST API, and providing a comprehensive web user interface, including things like asking users before deleting TPM data.
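As a side note on the passphrase mechanism mentioned above, libvirt can encrypt the emulated TPM state by referencing a libvirt secret object that holds the passphrase; a rough sketch, with a made-up secret UUID:

<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'>
    <encryption secret='6dd3e4a5-1d76-44ce-961f-f119f5aad935'/>
  </backend>
</tpm>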
Reuse is often good in software development, and we implemented TPM in such a way that it could be reused to support Secure Boot persistent data in oVirt. Although the features are a bit different, it was relatively easy to add Secure Boot data persistence. As you could see, adding proper TPM support to oVirt was a complex task, despite being built on top of a big amount of work already done by libvirt, QEMU, software TPM, and others. This wasn't unexpected, but there are always surprises in software development, and we had to deal with various challenges. I hope you enjoyed the story, and perhaps you can use some lessons from it to further enhance virtualization features in oVirt or other products. The list here is nothing unknown to any experienced software developer, but it never hurts to repeat these basic rules: paying attention to documentation rather than making implicit assumptions, always looking into the logs, paying attention to security, not forgetting about irregular flows, sharing where possible, and not making mistakes that will cause trouble later. But foremost, having the users in mind and cooperating efficiently towards reaching the final goal, although it will almost never be fully perfect. Thank you for watching this presentation, and if you have any questions, you can ask them here now or anytime later on the oVirt devel and users mailing lists.
oVirt is an open source virtualization solution based on kvm, QEMU and libvirt. Trusted Platform Module (TPM) device support, which brings new security capabilities that modern operating systems utilize or even require, was added to oVirt recently. In theory, adding TPM support should be as easy as just adding a TPM device to the virtual machine libvirt XML. But features built on top of a lower-level virtualization platform are not always as easy to implement as they may initially seem to be. This talk will present the challenges experienced when adding TPM support to oVirt. The talk will explain that a supposedly complete feature support in libvirt/QEMU may still require challenging design considerations. What can be used easily in a simple virtual machine running on a desktop computer may not be enough to get the things working well and reliably in a virtual machine management running across many hosts. Some of the challenges experienced with TPM support have been sorted out while other ones still wait for a good solution. Although focusing on TPM, the lessons presented in this talk can apply to a wide range of features. Whatever we work on, we cannot be just passive consumers of features but we must look for the right ways of using them and be proactive in avoiding pitfalls.
10.5446/57045 (DOI)
Hello everyone and welcome to this talk about tracing KubeVirt traffic with Istio. My name is Radim Hrazdil and I'm part of the KubeVirt network team. You can find my contact information at the bottom left. Without further ado, let's get started. First let me go over the outline of what I've prepared for the presentation. I will start off by introducing basic concepts of the Istio and KubeVirt projects. With the basics covered, we'll take a deeper look into KubeVirt and Istio in-pod networking. Having understood how KubeVirt and Istio route packets in Kubernetes pods, we will be able to identify challenges when trying to use KubeVirt virtual machines with Istio service mesh. We will then see how we can adjust KubeVirt and Istio network routing rules so that we can enable KubeVirt virtual machines to work with Istio service mesh. Finally, I'll demonstrate Istio monitoring capabilities with KubeVirt virtual machines by debugging a network issue. Alright, Istio service mesh. Well, what is a service mesh? A service mesh is a layer of infrastructure, transparent to the application, that provides additional features like traffic management, monitoring, load balancing, or additional security like mutual TLS. What's important is that these features can be vendored by the applications, instead of being re-implemented every time by each application. Well, how does it do all that? Istio deploys a sidecar container with an Envoy proxy to each Kubernetes pod, and configures iptables NAT rules so that all traffic is first sent to the Istio proxy container, and only then to the original destination. This gives Istio proxy containers the ability to intercept all traffic before it's forwarded to the application or to the outside world. And while it's doing so, it's also able to report a lot of metadata about the traffic that's being proxied. All the proxy sidecars are managed by the Istio control plane, which is responsible for distributing user configuration to the proxies. So for example, you could allow pod B to communicate with pod A, but nobody else. Another typical use case for Istio are canary releases. As you can see, the globe represents all your customers. Istio can route the majority of customers to the stable version, let's say v1, and only a small portion of requests to a new version. As confidence in the new version increases, we can also increase the portion of requests the new version receives.
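To give a concrete feel for such a canary split, traffic weights in Istio are typically expressed with a VirtualService; the service name and subsets below are made up for illustration, and the subsets themselves would be defined in a matching DestinationRule:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: v1
      weight: 90
    - destination:
        host: my-app
        subset: v2
      weight: 10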
So these were some basic concepts behind Istio service mesh and how it utilizes proxy sidecar containers. Now let's take a look at KubeVirt. KubeVirt is a Kubernetes extension that allows running traditional virtual machine workloads. The idea of KubeVirt is to have a common roof for running containerized and legacy software. The current expectation is that developers will continue to gradually shift toward the architecture of distributed microservices, but also that there will be a lot of legacy software for which there won't be a motivation or even a need to redesign and re-implement. And that's where KubeVirt comes in. Since using Istio service mesh is becoming a common practice for distributed applications, we can expect developers may want to use Istio for legacy software running in virtual machines as well. And we'll see how it can be done in a minute. But first, let's focus on understanding how KubeVirt configures the network in pods when using a so-called masquerade interface. On the left side, you can see a Kubernetes pod with a container which represents the QEMU process, or in other words, the virtual machine. What happens next after creating the pod is that KubeVirt creates a bridge and a tap device assigned as a port of the bridge. This tap device is then handed over to the QEMU process, which creates a virtual interface from it. The bridge and the virtual machine are assigned IP addresses by the DHCP server. You may have noticed that there is no connection between the bridge and the pod's eth0 interface. This is because there is no direct connectivity when using the masquerade interface. The traffic is routed using netfilter nat table rules. But before I dive into describing how inbound and outbound traffic is routed to and from a virtual machine, let me remind you how the netfilter nat table chains are traversed. When a packet enters through an interface, it first traverses the prerouting chain. If after prerouting the destination is a local process, it continues through input to the local process. If not, it goes through the postrouting chain and finally out through the output interface. Note that when a local process sends a packet, it goes straight to the output chain and then again continues through the postrouting chain and out through the output interface. Now let's take a look at how a KubeVirt virtual machine receives inbound traffic. When a packet is sent to the VM, it first enters the pod by its eth0 interface. As it traverses the nat table chains, it hits a destination NAT rule in the prerouting chain. The input interface name is indeed eth0, so we DNAT the packet to 10.0.2.2, which is the IP address of the VM. After that, the packet can be delivered to the VM. Let's see what happens when the VM tries to communicate with an external service. A packet with destination 1.1.1.1 is sent to the bridge, because the bridge acts as the default gateway of the VM. Then, as the packet traverses the nat chains, it enters the prerouting chain. Since the packet has not entered the pod through the eth0 interface, nothing happens in the prerouting chain. And since the destination is not a local process either, we move all the way to the postrouting chain. In the postrouting chain, we hit a masquerade rule. Masquerade is a special form of source NAT: it changes the source IP address of the packet to the IP address of the exiting interface, which in this case is eth0 with IP address 1.2.3.4. After that, the packet is sent out. We need to change the source address so that any replies are able to return back to the pod and then back to the VM. Now we understand how the virtual machine communicates with the outside world. Inbound traffic uses destination NAT, while outbound traffic uses source NAT. This mechanism is something we call the masquerade binding mechanism. KubeVirt provides some other mechanisms, like bridge binding, for example, which is very similar. In bridge binding, there would be a connection between eth0 and the bridge. It works by moving the IP address from the eth0 interface over to the tap device. There are no NAT rules involved in bridge binding. The NAT rules are the reason why we're talking about masquerade, because it allows us to mangle the traffic the way we need. That's something we are going to need for Istio.
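Expressed as iptables rules, the masquerade binding described above boils down to roughly the following; this is a simplified sketch, and the addresses and the exact rule set KubeVirt generates will differ:

# inbound: traffic arriving on the pod's eth0 is DNATed to the VM
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 10.0.2.2
# outbound: traffic leaving through eth0 gets the pod's address as source
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE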
Speaking of Istio, let's see how Istio proxy routing works. On the left side, you can see a Kubernetes pod with an application and two containers injected by Istio: istio-proxy and istio-init. istio-init configures iptables rules in the pod and exits. istio-proxy is the actual proxy, the Envoy process, that lives throughout the lifetime of a pod. The inbound traffic is represented by the green envelope. On the right side, I'll try to illustrate how the traffic traverses the nat chains as we go. So starting with inbound traffic, represented by the green envelope, it enters the eth0 interface and enters the prerouting chain. In the prerouting chain, it hits a redirect rule that forwards the packets to localhost on port 15006. The redirect rule is a special case of destination NAT; it basically changes the destination to localhost on a specific port, in this case 15006, which is the port the Istio proxy listens on for inbound traffic. Now, since the Istio proxy is a local process, we go through the input chain, but this chain is empty, nothing happens here. So the packet is delivered to the Istio proxy. Now, the proxy does whatever it is it's doing with the packet. It sends metadata about the traffic, like HTTP return codes. And when it's done, it sends the packet out to the original destination. Now, how does it know the original destination? Because we have changed the destination in the packet. Well, it learns that from reading a socket option called SO_ORIGINAL_DST. This socket option is populated by the redirect rule. When the proxy sends this packet out, we enter the output chain, because the proxy is a local process, so we're not going through prerouting now. In the output chain, there is a cool trick that prevents the traffic from ending up in an infinite loop, and it's this UID return rule. So as we enter the output chain, we evaluate this rule, which says that if the socket is related to UID 1337, which is the UID of the Istio proxy process (it's always this one), we return; we're not continuing to another redirect. This serves to distinguish the traffic that is already proxied from traffic that has been sent by the local applications. So we're not performing any further actions with this proxied packet, and it's delivered to the application. Now the application receives the packet and presumably it will respond to the original sender. So again, the application is a local process, so we start with the output chain. Now the application UID is not 1337, it is some random number. So this rule does not match and is not performed. So we go further down the output chain and hit the rule that jumps to the Istio redirect chain. And in the Istio redirect chain, there is a redirect rule to port 15001. This is another port that the Istio proxy listens on, and it's listening for all outbound traffic. So again, the packet is delivered to the proxy, processed by the proxy, and sent out to the original destination. Since the proxy is again a local process, we go through the output chain again, but this time we identify that the packet has been sent by the proxy, because the socket is again associated with an application with UID 1337. So we're not performing any further actions and we return from the output chain so that the packet can be sent out. Of course, before it is sent out, it would traverse the postrouting chain, but since there's nothing happening in the postrouting chain, I'm not showing it here. Lastly, you might have noticed that there are some dots in the output chain. There are actually some other rules that are added by istio-init, but the rules that I'm leaving out here are related to internal communication between the proxies and the Istio control plane.
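A heavily simplified sketch of the rules istio-init installs, just to anchor the flow described above; the real rule set has additional chains and exclusions:

# inbound traffic is redirected to the proxy's inbound port
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15006
# outbound traffic from local applications is redirected to the proxy's
# outbound port, unless it was sent by the proxy itself (UID 1337)
iptables -t nat -N ISTIO_REDIRECT
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner 1337 -j RETURN
iptables -t nat -A OUTPUT -p tcp -j ISTIO_REDIRECT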
Now that we understand how KubeVirt and Istio networking works, we can see what happens when we try to use both together. KubeVirt will try to DNAT all traffic in the prerouting chain into the VM, while Istio tries to redirect it to the proxy process. Now, this is not going to work. What we need to achieve is to allow Istio to redirect all the traffic to the proxy, and only after that hand the traffic over to the VM. We can do that by removing the DNAT rule from prerouting and adding it to the output chain instead. But we can't do only that, because when the proxy sends a packet, it sends it from localhost to the original port on localhost. If we change the destination IP address to the VM, the VM will be able to receive it, but it won't be able to reply, because the source IP address is localhost. So it will reply to localhost, but it will be the localhost of the virtual machine, not the pod. So we need to source NAT and set the source IP to the IP address of the bridge. And we make that change when we detect that the virtual machine has Istio injection enabled. For the outbound traffic, this part was actually implemented by my colleague Sebastian Czekman. I'm linking his PR in the bottom left, so all credit goes to him. What his PR does is that Istio looks at a specific annotation of a pod and, if it finds it, it adds these two rules to the prerouting chain. And what these two rules do is check if the traffic is coming from this interface, k6t-eth0, which is the name we use for our bridge in the pod. And if it does, it will jump to the Istio redirect chain, which is the chain for outbound traffic. Why can we do that? Well, we can consider all traffic coming from the bridge, which comes from the VM, as outbound traffic. This way we ensure that all the traffic that comes from the VM is first redirected to the proxy, and after that it's masqueraded and leaves the pod. And with that, let's see a demonstration. Let me introduce the topology that I have set up for the demonstration. I have two virtual machines, fedora-client and fedora-server, and two deployments, a busybox client and httpbin. So I have two clients and two servers serving a very simple web page. Now let's see how it looks in the Kiali dashboard. So this is the services tab of the Kiali dashboard. We see our two services are listed here, the httpbin service and the fedora-server service. The fedora-server service is not really healthy; it says that 50% of inbound traffic is failing. If we navigate to our graph, this is our topology graph. We can see our clients are sending requests to our services. Now, this service is red. If I click on it and look at the flags, I can see that I'm getting 404 codes on 100% of the requests. And this is our fedora-client virtual machine. So let's take a look at what's wrong with it. Indeed, we are getting 404 on one of the requests. If you take a look, it's just a simple loop that's sending curl requests, and one of the links is broken. Let's fix the link, and now it works. Cool. Let's go back to our graph. And with that, this should turn green. I'll skip forward because it takes some time. And here we go. After a few seconds, the topology has converged to an all-healthy topology. And this concludes the demo. Thank you very much for your attention. If you would like to try this yourself on your local machine, I encourage you to do so. If you follow this link, you'll be presented with a blog post that guides you through the setup of a local Kubernetes cluster along with the deployment of everything you need. That's all from my side. See you at the Q&A.
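For reference, the essential pieces on the KubeVirt side are a masquerade interface and the standard Istio injection annotation on the virtual machine instance; the following is only a rough, illustrative sketch with made-up names:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: fedora-client
  annotations:
    sidecar.istio.io/inject: "true"
spec:
  domain:
    devices:
      interfaces:
      - name: default
        masquerade: {}
    resources:
      requests:
        memory: 1Gi
  networks:
  - name: default
    pod: {}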
Software development has been gradually shifting from monolithic to distributed containerized applications. Such applications are composed of components referred to as micro services. With the increasing number of micro services, it becomes increasingly difficult to understand how all the components communicate. This is where Istio service mesh comes into play. Istio allows developers to manage and monitor network traffic between micro services and by providing features like mutual TLS, request retries or request circuit breaking. Vendoring these features from Istio helps keeping micro services focused on the actual application logic as they don't need to be implemented by the micro services. The IT industry has broadly adopted this architecture, but there are still plenty of legacy workloads running in virtual machines, which can't easily take the advantage of the features provided by service mesh. At least not until recently when KubeVirt introduced support for Istio service mesh. Attendees of this talk gain insight into the concept of the Istio sidecar proxy. A short demonstration showing typical use case of Istio service mesh -- canary deployment -- is presented. Next, this talk explains subtle differences of network traffic routing between regular Kubernetes pods and containerized KubeVirt virtual machines, leading to the challenges that these differences pose for traffic proxying. Finally, the changes necessary to support Istio for KubeVirt virtual machines are explained and the resulting functionality presented using the same scenario, but with the workload running in virtual machines instead of Kubernetes Pods. The takeaway of this talk is understanding of routing concepts behind Istio proxy sidecar with regular Kubernetes pods as well as with containerized KubeVirt virtual machines. Audience will have a chance to observe typical use case of Istio with both pods and virtual machines and get insight into the necessary changes that made this possible.
10.5446/57046 (DOI)
Hi, my name is Stefan Hajnoczi and today we're going to talk about what's coming in the VIRTIO 1.2 standard. We're going to look at the new virtio devices that have been added as well as some of the new features. So let's quickly recap what VIRTIO is. VIRTIO is an open standard for virtual I/O devices. Originally this solved the following problem: when virtualization became popular and there were more and more hypervisors and emulators, how could we get those hypervisors and emulators to have good I/O devices like network cards or disk controllers without each one of them having to implement their own custom device, which would require device drivers for every operating system? That's an n-times-n situation. And so VIRTIO is a standard that hypervisors can implement, and driver implementers for guest operating systems can implement these devices. That way you don't need to invent your own devices for every hypervisor and emulator. Nowadays it's also possible to implement virtio devices in hardware. A new specification tends to come out every few years, but in between, once new features have been merged by the VIRTIO community for the next upcoming spec, software can already ship with those new features. That's typically how it works; they don't have to wait for the next release in a couple of years. So what that means is that some of these devices that I show you today might be devices that you've already heard about or devices you already use, because they are in the upcoming spec and Linux, for example, has already shipped them. Okay, let's look at the new features in VIRTIO 1.2. The core of VIRTIO has a number of device model extensions. These will be useful for device designers, people who are creating new devices, because they give them more powerful and more flexible ways to design their devices, but they're not immediately useful to people using virtio devices today. So I'm not going to go into details of how the device model was extended in VIRTIO 1.2. But what I do want to mention is some of the new features in existing devices. In virtio-net, the popular network card that VIRTIO offers, there is UDP segmentation offload support as well as multi-queue receive optimizations: receive side scaling and per-packet hash reporting. They allow multi-queue receive to work better. So that's definitely worth checking out. You may already have it enabled on your machines, because that's certainly a feature that's already out there. For virtio-blk, a new secure erase command was added, and the purpose of this is that if you just write zeros to a disk in order to overwrite the data, you don't necessarily have a guarantee that the old data that was there before has now really been discarded and is irretrievable. It's possible that if there's some kind of block allocator or something underneath, maybe the disk didn't actually overwrite the data and someone who's determined could still somehow restore it. So the secure erase command helps with that. Then there's the virtio-balloon device, which has new features that allow reporting more memory information from the guest so that the hypervisor can make better decisions. But the real meat of this presentation, this tour of the new VIRTIO 1.2 devices, is the new device models that have been added to the standard. And that's what we're going to focus on today. There are nine new device types, and I think that's really exciting, because if you look at the set of device types that were there in VIRTIO 1.1, there weren't all that many more than nine.
So we've almost doubled the number of devices in the spec, and that's incredible. Some of these device types do things that virtio could never do before, like the virtio sound card, which now does audio. In addition to these standard devices, where there's a full description in the specification of how these devices have to work and what their interface is, there are also a number of additional device IDs that have been reserved for new devices. Some of those will be future standard devices, maybe in 1.3 and so on. Some of them might be application-specific; essentially they're so specific to some particular use case that there might only ever be one implementation of the device in the world, and for that reason that device might not ever become a standard device in the spec. So let's focus on the standard devices, these nine new devices. Let's go through them and take a look. The first one is virtio-snd, the sound device. This is a sound card. It's a general-purpose thing. It can handle things like listening to music or movie and game audio, including surround sound. It can do voice calls, so it can do both playback and capture, multiple streams. And it can also do pro audio recording. So this device supports all the different audio sample formats that you would expect for these use cases. And it's built on top of some of the concepts from the High Definition Audio spec, which means it should be fairly easy to integrate into existing audio stacks, because they typically will already have some of that functionality from HDA. The ALSA driver is available in Linux; it shipped in 5.13, but QEMU doesn't have an emulated device yet. So maybe someone will contribute a virtio-snd device in the future; I think that would be a nice addition to QEMU. The next device we're going to check out is the virtio-iommu. An IOMMU's job is to translate the memory accesses that a device makes, so not accesses from the CPU, but from the device, DMA. What it does is enforce memory protection, and this ensures that if a buggy driver programmed a wrong address into a device and the device tries to write to memory that it shouldn't write to, the IOMMU protects against that. So for reliability, it means that you have some level of protection against malfunctioning devices or buggy drivers that have programmed the wrong addresses into the device. On top of that, for security, it's a useful feature because it means you can isolate the device and not give it access to all of guest RAM. You can give it access to just the I/O buffers that it really needs to operate on. That way, it can't spy on sensitive data in memory, and it also can't corrupt memory in order to gain control over the guest, maybe by changing some of the pages so that it can plant some instructions that the CPU will execute. The kinds of use cases for the virtio-iommu include the following: if you run userspace DPDK networking applications inside your guest, they use the Linux VFIO API, and that typically requires an IOMMU. So this is an area where the virtio-iommu can be used. It's also useful if you're passing a device through to your guest, a physical device from the host to the guest, because now you can isolate it and ensure that it doesn't have full access to all guest RAM. And finally, in nested virtualization it is very common to use IOMMUs as well, because passthrough is common in nested virt.
One more thing that's worth mentioning about the IOMMU is that the different architectures, the different platforms, typically already have their own native IOMMU: Intel has an IOMMU, Arm has one, and so on. But the virtio-iommu is cross-architecture, so it is not specific to just one CPU architecture, which is a nice property to have if you're implementing, for example, your own kernel; if you implement virtio-iommu, you can run more easily on multiple architectures and have less code. The virtio-iommu has been available in Linux since 5.3, and it's also supported by QEMU 5.0. The libvirt support is on its way, and I think maybe by the time you watch this presentation, it will already have landed. Next up, we're going to look at the virtiofs device. virtiofs is a shared file system device; it allows the host to export a directory to the guest. So an entire directory tree, a file system, can be exported to the guest. And there's also optional support for directly accessing the contents of the files from the host page cache and not having to copy them into the guest page cache. That can be advantageous if you have some read-only data that many guests on the same host need to access. Normally, what they would do is copy that data from the host into guest RAM, into their own page cache, and since each VM has its own page cache, you're going to have n copies of this data. But with virtiofs's DAX window feature, you can directly access the host page cache and not have to copy it. virtiofs is based on the Linux FUSE interface, the userspace file system API, so it gains all those capabilities, all the functionality that FUSE has. The use cases for virtiofs are general things: if I install a VM, instead of having to scp some files into the VM, maybe it's easier to just mount the directory from the host and have in-place access to your data from the host. That can be very convenient. Or maybe if you're developing some code and you need to run tests inside VMs, maybe because you're developing low-level software like a kernel, or maybe you just do it for the isolation of running in a VM, then instead of copying everything into the guest, you can just mount it using virtiofs. And again, it's in place, it's more convenient. And then there are also more specialized use cases like Kata Containers and other secure container VMs; they typically need to share files from the host with the sandbox VM where containers execute securely. virtiofs has been in Linux since 5.4, in QEMU since 4.2, and in libvirt since 6.2. So this is a feature that has fully landed and is already being used, even though the device spec itself technically is only coming out in the next VIRTIO release.
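Trying virtiofs out is fairly simple. A rough sketch of the moving parts follows; the socket path, shared directory and tag are made up, and virtiofsd's option syntax differs slightly between the C and Rust implementations:

# on the host: run the virtiofs daemon for the directory to be shared
virtiofsd --socket-path=/tmp/vhostqemu -o source=/srv/shared

# on the host: start QEMU with a vhost-user-fs device using that socket
qemu-system-x86_64 \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,chardev=char0,tag=myfs \
    -object memory-backend-memfd,id=mem,size=4G,share=on \
    -numa node,memdev=mem \
    ...

# in the guest: mount the shared directory by its tag
mount -t virtiofs myfs /mnt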
Okay, the next device we're going to look at is the pmem device, virtio-pmem. This is a persistent memory device. The reason for this device, even though QEMU already emulates physical NVDIMMs, physical persistent memory devices, is that emulated NVDIMMs don't necessarily have a hypervisor-trappable flush mechanism. When an application inside the guest stores some data and wants to ensure that that data is permanently stored on the persistent memory, so even after power failure, even after reboot, even after restarting the application, it wants to make sure the data is really going to be there. If there's no hypervisor-trappable mechanism, say you use a cache flush instruction to ensure persistence, and the hypervisor can't trap it, then the hypervisor can't do something like an fsync system call on the host that would ensure the data is persisted. And so that was kind of a gap. In some configurations, it was not possible to use QEMU's NVDIMM emulation, but the virtio-pmem device solves this because it has a virtio flush mechanism. That can be handled by the hypervisor to ensure that your data is persisted. So again, this is something you would need if you're not backing your persistent memory with an actual NVDIMM. It's also something you can use in order to bypass the guest page cache: if you have a blob of data that many VMs will need to read and you want to reduce your memory footprint, you can just put a virtio-pmem device in there and they can access the data from the host page cache instead of having copies in the guest. So that's similar to virtiofs, except it doesn't give you a directory tree or anything like that. It's not a file system, it's just a blob of memory. virtio-pmem has been in Linux since 5.3, so the driver is there, and in QEMU since 4.1. libvirt also has support since 7.1. All right, so sticking with the theme of memory, there's a new virtio memory device, virtio-mem. This is kind of a successor to virtio-balloon. The traditional ballooning device had a few shortcomings, and virtio-mem addresses some of them and takes a slightly different approach. The virtio-mem device is all about hotplugging memory. It allows you to plug and unplug memory from your guest. And what's interesting is that you're not limited to DIMM sizes, because if you try to simulate memory hotplug like you would on bare metal, where you can put a new stick of memory into the machine, those DIMMs on some architectures have granularities like 128 megabytes. So you're incapable of adding just a few megabytes or removing just a few megabytes; it has to be 128 megabytes or more. What virtio-mem gives you is the ability to do fine-grained memory hotplug. That's especially useful if you want to have some kind of lightweight workload inside a virtual machine where you do want to finely control the amount of memory that's available to your workload. What's also interesting about virtio-mem is that it has NUMA support, so you can indicate which NUMA node you want to plug memory into. That's different: virtio-balloon, for example, didn't support that. The driver has been available since Linux 5.8, and QEMU support is in 5.1. libvirt also has support for it in 7.9.
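Both of these memory devices are easy to experiment with on the QEMU command line. The following is only a rough sketch with made-up paths and sizes, and option details may vary between QEMU versions:

# virtio-pmem: a file-backed persistent memory device
qemu-system-x86_64 -m 4G,maxmem=8G \
    -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/virtio_pmem.img,size=4G \
    -device virtio-pmem-pci,memdev=mem1,id=nv1 \
    ...

# virtio-mem: fine-grained memory hotplug, optionally tied to a NUMA node
qemu-system-x86_64 -m 4G,maxmem=12G \
    -object memory-backend-ram,id=vmem0,size=8G \
    -device virtio-mem-pci,id=vm0,memdev=vmem0,node=0,requested-size=1G \
    ...

The requested-size property of virtio-mem can then be changed at runtime, for example via QMP, to grow or shrink the amount of memory exposed to the guest.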
And in addition to that, there's a write counter in the device, and it's a saturating counter, so it doesn't overflow and doesn't wrap back to zero; it effectively places a limit on the number of updates. This is the kind of thing you would use for very critical system data, maybe firmware or configuration data, small amounts of data that need to be stored safely in a way that can't be tampered with. Another scenario might be an application with very sensitive data that runs in a trusted execution environment, for example: it could hold the authentication key, the shared secret, and interact with the RPMB device with the knowledge that the data can't be tampered with. There are no Linux drivers for this yet and no QEMU implementation yet, but the device is in the VIRTIO 1.2 spec, so it's possible to implement a compliant device or driver. Okay, the next thing we're going to look at is the virtio-scmi device. This is a transport for Arm's System Control and Management Interface, an interface that allows Arm systems to control the power and performance of their devices. One example: you might be able to turn down the performance profile of a component so that it runs slower but uses less power, if you're trying to save power. virtio-scmi is a transport for these messages, which means you can use the SCMI management tools and the kernel driver stack inside a VM. Previously they were only available on bare metal, so this is a nice way to reuse the whole SCMI concept that exists on bare metal for virtual machines. The driver is available in Linux 5.15. Okay, the next device is the virtio-i2c device. This is definitely an interesting device if you're doing embedded or IoT. Servers and laptops will have I2C in there somewhere, but it tends to be quite hidden and doesn't play as big a role. I2C targets can be things like EEPROMs, clocks, sensors, or small displays, and the virtio-i2c adapter is basically a bus controller that can talk to one or more of these I2C targets. These targets can either be emulated devices (for example, QEMU could emulate an EEPROM or a clock) or passed-through devices, so real physical I2C devices can now be passed into a guest using virtio-i2c. The I2C driver for Linux is already in 5.15, and it's part of the Linux I2C framework, which means you can use the usual ioctls and APIs to talk through a virtio-i2c device. The final new device we're going to look at is virtio-gpio, a general-purpose input/output device. It allows control over GPIO lines and the electrical signal values they carry. For example, if you have an LED, you could set an output line to that LED to the value 1 to turn the LED on, or 0 to turn it off. That's a simple example. It supports not just output but also input: if you had a physical button on your host wired to a GPIO input line, you could pass that through to the guest using virtio-gpio, and the guest software could read that line to find out whether the button is pressed or not. virtio-gpio also has interrupt support, so a line can be put into interrupt mode and the guest will receive notifications when the line changes; it doesn't have to keep polling the value.
So the use case for this is interfacing external devices and maybe slow logic signals from VMs. The driver is already part of Linux; it's in 5.15 and is part of the GPIO framework, so all the usual ioctls and APIs for GPIO are available through this driver. There's no QEMU implementation of this device yet. So I hope this was interesting, and maybe you found out about a new device type that might come in handy. New VIRTIO devices are being designed as people come up with use cases for them, and once a device is there and we have drivers, it's so much easier for everyone who comes later to reuse those devices, or just make small feature extensions to them, rather than inventing their own thing. If you want to write a driver or implement a device, please check out the spec; there's a GitHub repo that I've linked on this slide, and you're welcome to contribute and to ask questions. Yeah, thank you very much.
The VIRTIO standard defines I/O devices that are commonly used in virtual machines today. The last version of the standard was released in 2019 and much has changed since then. This presentation covers new devices and features in the upcoming VIRTIO 1.2 standard. There are 9 new device types: fs, rpmb, iommu, sound, mem, i2c, scmi, gpio, and pmem. We will look at the functionality offered by these devices and their status in Linux. This presentation is aimed at users of virtualization who may be interested in new virtual devices that are becoming available in Linux, QEMU, etc. It may also be of interest to driver and virtual machine monitor developers who are considering implementing new devices.
10.5446/57048 (DOI)
Okay, welcome. We are here to talk about Hyper Hyper Space, which is a library to create peer-to-peer applications. The idea is that most of our devices have lots of memory, lots of storage space, and CPUs that can do all the computation most of our applications need. More importantly, most of our devices have web browsers. So the idea of this library is to use web browsers as a kind of universal virtual machine where we can run peer-to-peer applications virtually everywhere. What we're trying to do is create data types that are very, very easy to replicate. When using Hyper Hyper Space, we will never do an RPC call to another machine to get that machine to do something for us. The way we do things in Hyper Hyper Space is that we always replicate whatever information we need locally, work on that information on the device where our application is running, and then whatever changes we make are replicated back to the rest of the people using the application, to their devices. So ideally we will make one data type that represents the entire state of our application, give this data type to Hyper Hyper Space, and it will keep it synchronized with whoever else is using the application. Essentially we are creating a new layer. In a client-server application this information mesh layer would not exist, and our application would deal directly with network abstractions like hosts and connections, and maybe HTTP verbs like POST and GET. With this information mesh between our application and the network, we use higher-level abstractions instead. We have a store, local to each device, where we save objects. These objects are created using Hyper Hyper Space's modeling library, which we will see in a bit. And then there is a sync component, which we configure using something called peer groups: the group of devices that must keep some of the objects we create synchronized. So let's take a look at how that works. Basically, the store just stores typed immutable objects, which are shown in the table on this slide. For example, if you go to a web page powered by Hyper Hyper Space and fire up the console, you will see an IndexedDB database with entries that look like this: JavaScript literals that contain the object contents, the hashes of those objects, and a tag that indicates which type each object has. Then, going to our data model, Hyper Hyper Space exposes a base class called HashedObject, and all the objects we want to store and share must derive from this base class. This class gives us two things: a consistent way of hashing the contents of the object, and a way to transform the object contents into a JavaScript literal. And it asks us to do one thing: to implement a validate function, because later on, when we are synchronizing, we will receive instances of this object over the network from untrusted peers, and this validate function enables the library to check whether the information we are receiving is valid. Finally, one observation: if one object references another. For example, here we have a message object that has an author, which is another object, of type identity.
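As a concrete illustration of the message-with-author example just mentioned, here is a rough TypeScript sketch of an object that can be literalized, hashed, and validated. It deliberately uses a small stand-in class rather than the real Hyper Hyper Space API (the class and method names here are invented for the example); it just mimics the mechanism described above, including storing the author as a hash-based reference, which is exactly what the next part of the talk explains.

```typescript
import { createHash } from "node:crypto";

// Tiny stand-in for the HashedObject idea: objects can be turned into a plain
// literal, hashed, and validated when received from untrusted peers.
abstract class SketchHashedObject {
  abstract toLiteral(): Record<string, unknown>;
  abstract validate(literal: Record<string, unknown>): boolean;

  // Hash of the literalized contents; this is the object's identity.
  hash(): string {
    const canonical = JSON.stringify(this.toLiteral());
    return createHash("sha256").update(canonical).digest("hex");
  }
}

class Identity extends SketchHashedObject {
  constructor(public name: string) { super(); }
  toLiteral() { return { type: "identity", name: this.name }; }
  validate(lit: Record<string, unknown>) { return typeof lit.name === "string"; }
}

class Message extends SketchHashedObject {
  constructor(public author: Identity, public text: string) { super(); }

  // The in-memory author reference is replaced by a hash-based reference
  // when the object is literalized for storage or for sending to peers.
  toLiteral() {
    return { type: "message", author: this.author.hash(), text: this.text };
  }
  validate(lit: Record<string, unknown>) {
    return typeof lit.text === "string" && typeof lit.author === "string";
  }
}

const alice = new Identity("alice");
const msg = new Message(alice, "hi");
console.log(msg.hash(), msg.toLiteral());
```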
This reference, which in the running version of our program is a memory address, is replaced by a hash-based reference: the hash of the identity, which is another immutable object, is used to internalize the reference inside the store. So we just create these objects as we usually do, using new and allocating memory in our running program, and then we call a method called save that Hyper Hyper Space's store provides, and this automatically creates these immutable entries inside our application's store. Again, if we are in a browser this will be IndexedDB, and if we are running on a server this will be some database, probably SQLite, which is the one we currently support. Of course, this presents some challenges. For example, if we have mutable information: in this example the author is trying to change the contents of his message, replacing the text that used to say "hi" with "hello", maybe because he wanted to be a bit more formal, I don't know. But this doesn't work, because when you save it to the store, the contents of the object change, so you get a new hash, and there's no relationship between the message you saved before and the message you have now. So as a way of mutating information this will not work, and we need to come up with something else. What Hyper Hyper Space does is use operation-based CRDTs. CRDTs are very well documented data types for these kinds of settings, where you have several devices modifying the same data. Here's an example with a set. We create a set using new and save it in the store, and this creates an object just like the immutable ones we were looking at, with a hash value. There's a technicality here: even though this is a mutable object, it is created in a known initial state, in this case the empty set. Of course, if we hash the empty set we always get the same hash, so there would be a single set across our whole universe; the system therefore adds a random seed to each new set to get a different one. Then, as we make changes to the set, in this case adding "apple" and "orange", and save the set to the store, this creates new objects: the operations that add elements to the set. The operations are just like any other object we have been creating, and they have references, also hash-based, indicating which set they want to modify. Following on with the set example, say we want to delete the apple. The essence of these CRDT data types (CRDT means conflict-free replicated data type) is that no matter in which order the operations arrive, because different peers in the network may receive them in different orders, the result must always be the same. In this case, this is what is called an observed-remove set, and when we delete the apple, if you look at the operation object being generated, you will see that it doesn't delete the apple by value; it actually points to the addition operation that added the apple. So if someone else added the apple at the same time we were deleting it, there would be no ambiguity about whether we are deleting that particular addition or not: either we received it and we create a delete op for that one, or we didn't. So eventually, when everybody receives these operations, the result will be the same for everyone. This is very important.
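Here is a rough sketch, in the same illustrative style as before, of how an observed-remove set behaves: additions are operations with their own identity, and a delete points at a specific addition operation rather than at the value, so concurrent adds and deletes commute. This mimics the behavior described in the talk rather than the actual Hyper Hyper Space mutable set API.

```typescript
// Minimal observed-remove set sketch: each add gets a unique op id,
// and deletes reference the add-op id, not the value itself.
class ORSetSketch<T> {
  private adds = new Map<string, T>();   // opId -> value
  private removed = new Set<string>();   // opIds that were deleted

  applyAdd(opId: string, value: T): void {
    this.adds.set(opId, value);
  }

  applyDelete(targetAddOpId: string): void {
    this.removed.add(targetAddOpId);
  }

  values(): T[] {
    return [...this.adds.entries()]
      .filter(([opId]) => !this.removed.has(opId))
      .map(([, value]) => value);
  }
}

// Two replicas receive the same operations in different orders
// and still converge to the same contents.
const ops: Array<(s: ORSetSketch<string>) => void> = [
  (s) => s.applyAdd("op1", "apple"),
  (s) => s.applyAdd("op2", "orange"),
  (s) => s.applyDelete("op1"),
];

const a = new ORSetSketch<string>();
const b = new ORSetSketch<string>();
ops.forEach((op) => op(a));                  // one order
[...ops].reverse().forEach((op) => op(b));   // the reverse order
console.log(a.values(), b.values());         // both converge to [ 'orange' ]
```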
So here, just restating: this is a commutativity property between all the operations the type supports, and Hyper Hyper Space provides some types you would expect, for example a mutable set and a mutable reference. But if you want to create other mutable types, there's a mutable object base class you can derive from to define your own operations, and as long as they respect this commutativity rule, everything will be fine. So, moving on: say we want to create a chat group, and we want a data type that supports this chat group through Hyper Hyper Space. Maybe this is a simplified version, but it will be something like this. We have an owner that created the chat group; then we have a set of moderators, who are responsible for enforcing that whatever rules of conduct the group has are respected by everyone; then we have the set of members, which are the participants of the chat; and then we have the set of messages. Maybe we could be more ambitious and do away with the owner and have some kind of voting, but to keep it simple, let's imagine it works like this. Then we have some rules we want to enforce through our data type: for example, that only members can post messages, that the moderators are designated by the owner, and that members can delete their own messages, while moderators can delete other people's messages. If we think about it, this last rule can generate situations where commutativity would be broken. For example, say we have a moderator called Alice, and at the same time, on two different peers, she is removed from the set of moderators and, concurrently, she uses her moderator rights to remove a message from Bob. Depending on which of these operations you receive first, the deletion of Bob's message would be valid or not, because if it comes after Alice is removed from the moderator set, Alice is no longer a moderator and can't do this. So Hyper Hyper Space adds some more constructs to deal with these kinds of situations. We have a notion of operation invalidation: an operation, as we've seen, can become invalid. And we have explicit causal dependencies between operations: you can state that one operation depends on another. We also have some special data types that use these properties. One is called a causal set, which in addition to addition and deletion has an extra operation called an attestation. An attestation says that, at a given point in time, one element is a member of that set. And there's an asterisk on the delete, because deletion in a causal set is a bit more sophisticated: it marks the point in the causal history where the deletion happened, in the local store of the peer that is creating the delete operation. This is kind of complex, just bear with me. So when Alice creates her delete op to remove Bob's message, because she found it offensive or against the rules or whatever, she creates a causal dependency. First, she attests that she belongs to the set of moderators. So these are two different operations: the attestation is an operation, and the deletion of Bob's message is another operation. And the deletion carries properties saying that it depends on the attestation being valid.
And now, since the delete operation that removed Alice from that set has marked a position in the history (it's the local history of that peer), either the attestation got there before Alice was deleted or it didn't. And that will be the same for everybody, because this picture of the local history is shared alongside the delete operation, so everybody will agree on whether the attestation is valid or not. If it is valid, then the deletion of Bob's message goes through. If not, the attestation is undone, and since we know there is a causal relationship between the deletion of Bob's message and the attestation, the undo cascades and the deletion is undone as well. So this gives you a feeling for how we can compose CRDTs to create more sophisticated rules using the Hyper Hyper Space engine. Just to recap: our immutable data objects are hashed and literalized for sharing, and we need to provide a validate function so that when they are replicated, the sync engine is able to know whether what you are getting is right or wrong. We use operation-based CRDTs to get mutable types. Finally, invariants are modeled, if necessary, using explicit causal relationships and invalidation. And of course everything is persisted as immutable typed objects, and we use hashes for referencing and for the identity of these objects. So this sums up our data model, and I want to quickly go through how sync works. Basically, each dot in this drawing is a peer, which would usually be a different device. Peers in Hyper Hyper Space are a pair: a network endpoint and an identity. The identity usually has a keypair inside. Then we have a second concept, the peer group, which is how each device initializes this mesh network. The peer group is also a pair. One part is the local peer, indicating who we are in this mesh network. The second is called a peer source, and its role has basically two functions: one is giving us random peers to connect to, so we can get some neighbors in this mesh network; the second is that, if we receive a connection request from a potential peer, it tells us whether we should accept or reject it. Once this mesh network is configured, it basically does two things. First, it gossips about the state of all the mutable objects. Here's an example: someone found a new add op and is sharing it with the rest of the network. Second, once through gossip we find out that we are behind, that we are missing some updates, there is a sync. This is not how the sync actually works, this is just for show: basically, you need an object that hashes to something, someone sends you a JavaScript literal, you hash it, you check whether the hash is right, you probably see that you are missing some references, you have their hashes, and you ask for more literals. Eventually you are able to reconstruct the object, and then you can call the validate function to determine whether your node should accept it or not. It's not really like this; actually it's more like Git. What Git does when you pull or push is that you get headers for the whole history, it finds the places where it forks, and there are optimizations. But you get a feeling; it's something like this. And finally, there's one more thing, because we have a lot of objects in our model, but usually there is one root object.
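Here's a small sketch of the naive fetch-by-hash loop just described: receive a literal, check that it hashes to what was asked for, then queue any referenced hashes that are still missing. The requestLiteral function passed in is a hypothetical stand-in for asking peers on the network; the real Hyper Hyper Space sync protocol works more like the Git-style history exchange mentioned above.

```typescript
import { createHash } from "node:crypto";

type Literal = { value: unknown; references: string[] };

function hashOf(lit: Literal): string {
  return createHash("sha256").update(JSON.stringify(lit)).digest("hex");
}

// Naive sync: fetch an object and, transitively, everything it references.
async function fetchObjectGraph(
  rootHash: string,
  requestLiteral: (hash: string) => Promise<Literal>
): Promise<Map<string, Literal>> {
  const fetched = new Map<string, Literal>();
  const pending = [rootHash];

  while (pending.length > 0) {
    const hash = pending.pop()!;
    if (fetched.has(hash)) continue;

    const literal = await requestLiteral(hash);
    if (hashOf(literal) !== hash) continue;   // discard content that doesn't match its hash

    fetched.set(hash, literal);
    pending.push(...literal.references);      // follow hash-based references to missing objects
  }
  return fetched;
}
```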
In our example, it was the chat group object that had all the containers inside. So usually something called a space is generated from this root object, and spaces can be broadcast and looked up in these mesh networks. That's how we would, for example, join a chat group in Hyper Hyper Space. A space is a bit like a file, but designed for working in a network setting; you can broadcast it and later synchronize all the objects that are within your space. This is usually part of what the chat group would do automatically for you: whoever designed the chat group will design its space and how its synchronization should work. Okay, so we're basically done. Now we have an idea of how a Hyper Hyper Space application works. We define some data types and we configure a mesh network; that should usually be set-and-forget, something we just do at the beginning. Then we work on this local store. We have a save operation; a load operation, where we give the store a hash and it gives us back an object; and then a watch operation for the mutable objects, so we can ask the store to keep one of the objects in our running program up to date. And since the store is being kept synchronized by Hyper Hyper Space, just by keeping our in-memory version in sync with the store, we will have the latest version. And then there are other things: I know there are React bindings, so you can feed these objects directly into the state of your UI, and things like that. But that's basically it; I just wanted to give you a feel for how this works. And okay, there will be a Q&A session in a bit. Here are some links: the Hyper Hyper Space website, the core library that does all the sync, and a link to the chat group example we have been discussing, so you can look at the actual code. And I think there are also two more things: a Discord server that you can join if you want to discuss things, and a white paper discussing the things I have glossed over in greater detail. Okay, so thank you very much. See you in the Q&A session.
A quick tour on using Hyper Hyper Space datatypes to create fully distributed structures for collaborative applications. We'll start with a simple last-writer-wins JSON object and move on to complex structures like a moderated chat room and a simple ledger. Then we'll open them inside a web browser, using plain JavaScript, and show how the web can be used as a platform for truly peer-to-peer applications. The Hyper Hyper Space library offers two things: a data modeling language that presents building blocks to create and compose CRDT-like operational datatypes, backed by a Merkle-DAG; and a synchronization protocol for Hyper Hyper Space datatypes, with an implementation that works both in-browser (through WebRTC & IndexedDB) and in NodeJS. In this talk we'll focus on how to use the modeling language, and see in practice how to use the created objects inside a web browser using JavaScript.
10.5446/57049 (DOI)
Hello everyone and thank you for having me here. My name is Alfonso de la Rocha, I'm a research engineer at Protocol Labs, and I'm here to talk about file sharing in peer-to-peer networks. Actually, I'm going to talk about all of the research we've been doing at Protocol Labs Research to try to improve file sharing and drive speed-ups in file sharing in peer-to-peer networks. It is well known that file sharing and file exchange in peer-to-peer networks is hard, because you have to worry about content discovery, content resolution, and content delivery, and there are a lot of nodes in the network that can potentially have the content. Doing all of that without any central point of coordination is even harder. Out there, there is a whole gamut of content routing systems that help in this quest of finding the node that stores the content we are looking for. For instance, BitTorrent has BitTorrent trackers to discover nodes that store the content. In Web 2.0 we see DNS as the system that helps us find the server that has the resource we are looking for. And in peer-to-peer networks we usually organize content in a DHT in order to be able to find the nodes that store the content we are looking for. The problem is that all of these content routing systems have their own trade-offs. BitTorrent trackers are centrally governed, and the same happens with DNS: they are fast, but they are centrally governed. And then we have the DHT, which is the main content routing system in peer-to-peer networks, but when the network is large and the system starts to scale, the DHT is pretty slow. So, to overcome these trade-offs in content routing systems, we came up with BitSwap. In the end, BitSwap is a message-oriented protocol that complements a provider system, a content routing system, in the discovery and exchange of content in a distributed network. BitSwap is already deployed: it's used in IPFS as the exchange interface and in Filecoin as Filecoin's blockchain synchronization protocol. BitSwap has a modular architecture that is really simple. In the end, BitSwap exposes a simple interface with two operations, a get operation and a put operation. The get operation is the one responsible for telling BitSwap that you want to find and download content in the network, and the put command stores content in the network: we say, hey BitSwap, this is the block, the content, or the file that we want to store in the network. The modules BitSwap is made of are the following. First we have a connection manager that leverages a network interface to communicate with other nodes in the network and exchange messages with them. Then we have the ledger: whenever other BitSwap nodes send requests to our node, the ledger tracks all of the requests being made by those nodes. This way we know what others are asking for, and if we have that content in our block store, we will be able to send it back to them.
And then we have the session manager, which, when we trigger a get operation, is responsible for spinning up new sessions and orchestrating all the messages that allow us to discover the content and then download it from other nodes using BitSwap. The session manager leverages a content routing interface in order to communicate with the providing system, the content routing system, that there may be in the network. In the case of the example we will follow throughout this presentation, which is IPFS, BitSwap complements the DHT as the content routing subsystem of the network. But BitSwap is able to work with other content routing systems, for instance DNS or a network database, and it is even able to work in isolation, without the help of another content routing subsystem. We'll see in a moment how BitSwap works and why I'm saying this. But before we start with the operation of BitSwap, let's understand how BitSwap sees content and how it finds and manages content within the network. In BitSwap, and also in IPFS, content is chunked into blocks. So we have, for instance, this file; this file will be chunked into different blocks that are uniquely identified through a content identifier, or CID. The CID in the end is just a hash of the content of the block. It's a way of uniquely identifying these blocks of a file; or, because of deduplication, these blocks can belong to more than one file, as long as they have the same content. And these blocks are usually linked to one another in a DAG structure, like the following. This DAG structure can represent a lot of things. It can represent a file: for instance, a file with a lot of blocks, where the root CID block stores links to the rest of the blocks that comprise the file. Or it can represent a full file system. That would be the case if, for instance, we have a directory: the root of the DAG structure would be the directory, with links to all the files in the directory. In this case we can have, say, three files, and each of these files can be made up of two blocks. All of these items will have a CID, and here we have the root CID with links to the files, and the files with links to the blocks that comprise them. This is how BitSwap understands content and interacts with content in the network. One thing worth noting before we start with the operation of BitSwap is the common request patterns used when fetching content in a peer-to-peer network, and specifically in IPFS. The importance of knowing the common request patterns is that BitSwap will behave differently according to the request pattern used. One common request pattern is when we are trying to fetch a full data set or a full file. For instance, consider a data set with a lot of files, where this is the name of the data set, these are some of the files in it, and each file is made up of a different number of blocks. In this case, we say, hey, I want to get this data set, using the get command from the BitSwap interface that we've seen.
What BitSwap will do is first fetch the root, the root CID block of the DAG structure, and once it gets this block it will inspect it and check the links for the next level. It will get those blocks, and once it has them it will learn, through their links, about the blocks in the next level. In this way BitSwap traverses the DAG structure, level by level, gathering all the blocks in it. This DAG structure can be as deep as we want it to be. That's one of the common request patterns, but there's another one that is also really common: the one we would use when we want to download the assets to render a website. Imagine we have this directory that stores a website and we want to render the web page, this part, webpage.html. In this case, the first thing BitSwap does is get the root CID, the root of the DAG structure, and then it traverses the path block by block instead of going level by level and gathering all the blocks of each level, as was the case in the previous request pattern. It takes the root CID block, looks at its links, follows the one it is interested in, which is page, and once it gets the block for page, it looks at that block's links and follows the one for doc.html. Once it reaches doc.html, which is the content it actually wants, the file it wants to render, it reads all the links in that block and fetches all the blocks for that level. These are two of the common request patterns used when fetching content in IPFS with BitSwap, and this is the flow BitSwap follows. So, as I've said, BitSwap is the exchange interface in IPFS. To understand how BitSwap operates, let's see how it works when fetching a file from IPFS. BitSwap is a message-oriented protocol, so we will see six different message types: three requests, which are want-have, want-block, and cancel, and three responses, which are HAVE, the block itself, and DONT_HAVE. When we want to fetch a file from the IPFS network, IPFS first checks whether that file is in its block store. Imagine I want to download the doc.html we've seen before: the first thing IPFS does is check whether the blocks for that file are in the block store. If they are not, IPFS triggers a get operation in BitSwap, which starts a new session that begins looking for all of the blocks of that doc.html file. The first thing a BitSwap session does is broadcast a want message to all of the node's connected peers. This want message is a want-have message saying, hey, out of all my connections, please let me know if any of you have the block for this CID. In this broadcast we are trying to find the root CID of the content we're looking for. So if we're trying to fetch a full file, we would be looking for this specific block, the CID1 block, in the broadcast stage; and for doc.html, the same: we try to find the root CID block that gives us information about the links that lead to doc.html. So we send all of our connections a request to check whether any of them has the block for this CID.
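As a rough illustration of the two request patterns just described, here is a small TypeScript sketch: one function fetches an entire DAG level by level, the other resolves a named path from the root, fetching only the blocks along the way. The getBlock helper and the block shape are hypothetical stand-ins for whatever block-fetching machinery the node provides, not actual BitSwap or IPFS APIs.

```typescript
// Hypothetical block shape: named links to child CIDs plus the block's data.
type Block = { links: Record<string, string>; data: Uint8Array };
type GetBlock = (cid: string) => Promise<Block>;

// Pattern 1: fetch a whole DAG (e.g. a data set), level by level.
async function fetchFullDag(rootCid: string, getBlock: GetBlock): Promise<Map<string, Block>> {
  const blocks = new Map<string, Block>();
  let level = [rootCid];
  while (level.length > 0) {
    const fetched = await Promise.all(level.map(getBlock));
    level.forEach((cid, i) => blocks.set(cid, fetched[i]));
    // The links of this level tell us which blocks make up the next level.
    level = fetched.flatMap((b) => Object.values(b.links));
  }
  return blocks;
}

// Pattern 2: resolve a path like "page/doc.html", fetching only blocks on the path.
async function resolvePath(rootCid: string, path: string, getBlock: GetBlock): Promise<Block> {
  let current = await getBlock(rootCid);
  for (const segment of path.split("/")) {
    const next = current.links[segment];
    if (next === undefined) throw new Error(`no link named ${segment}`);
    current = await getBlock(next);
  }
  return current;
}
```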
And in parallel, BitSwap makes a request for that CID to any of the available providing subsystems in the network, which in the case of IPFS is the DHT. So in case none of my current connections have the content, I have a way of finding out who in the network is storing it, in this case the root CID block of the content I'm looking for. Depending on whether these nodes have the content or not, they answer either with a HAVE message saying, hey, I have this content, or with a DONT_HAVE message saying, I don't have this content. And through the DHT we may also learn about a node that I'm not connected to that has the content. When the BitSwap session receives these responses, it adds all of the peers that responded successfully to the peers of the session. So in subsequent interactions for the discovery and exchange of content, instead of asking all the connected nodes, the session only asks the ones that answered this request successfully, in this case these three nodes. This is the view from the peer that is requesting content from the network. But what is the view of a peer that receives this request? Imagine this broadcast from peer A being received. If peer A sends a want for CID1 and a want for CID2 to peer B, then peer B, according to the requests it is receiving from peer A, updates the want list in its ledger. We've seen that the ledger is the module used to keep track of which CIDs, which blocks, other nodes are looking for. So peer B will keep in its ledger information about the blocks that A is looking for. In this case, peer B may not have the blocks and send a DONT_HAVE to peer A saying, hey, I don't have it, but peer B will still remember the blocks being requested by peer A. That way, if it happens to receive the block through some other channel at some later time and sees in the ledger that peer A is still looking for CID1, it immediately forwards the block to peer A. And once it sends this block to peer A, since peer A now has the content, the entry can be removed from the ledger. So this is the view from the peers that receive requests from other nodes in the network. And what is the flow from the discovery to the actual download or exchange of the block? Because so far we've seen the broadcast used to find out who is storing the root CID of the content I am looking for. So here peer A sends a want-have to its connections, peer B, peer C and peer D, and they answer according to whether they have the block or not. Say all of them have the file, so they answer with a HAVE message and are added to the session. As the first response received by peer A is from peer B, peer A says: okay, peer B has the content, so I will directly ask for the exchange. Peer A sends a want-block; with a want-block we're saying, hey, please send me this block, I already know that you have it. And peer B answers with the block for the root CID of the content we're looking for.
In this case, peer C and peer D may answer the request afterwards, but peer A won't ask them for that block; it will just keep the knowledge that these two, peer C and peer D, potentially have the rest of the DAG, because they have the root CID. And from there on (as we've seen, once we get the root CID we learn about more CIDs in the DAG structure, because we inspect the links and know what to ask for), since the three of them are inside the session because they answered the broadcast successfully, we can start asking for more blocks in the next level of the DAG. To peer B, for instance, we can send want-blocks directly, because we know it has at least the root CID and therefore potentially has the rest of the levels of the DAG structure; so instead of sending a want-have, then a want-block, and going back and forth, I can directly send some want-blocks and the rest as want-haves. In this case, in order to have multiple exchange paths, peer A sends non-overlapping want-block requests to all of the peers in the session. This way we spray the requests and try to get the content. Another thing worth noting is that inside a BitSwap message we can put more than one request, because inside the envelope there is a want list of requested CIDs. Let's take the exchange between peer A and peer B as an example. Here we're sending, for these three CIDs, a want-block, and for the rest a want-have; and we see that, according to whether it has each block or not, peer B answers with blocks to the want-blocks, HAVEs to the want-haves, and DONT_HAVEs for the blocks it doesn't have. Whether it's a want-block or a want-have, if peer B doesn't have those blocks, it answers with a DONT_HAVE. This back and forth of want-haves and want-blocks is repeated over and over until the whole DAG structure is fetched: once we have the root CID, we can fetch level after level until we have all the blocks for the content we were looking for. But what happens if at some point I keep receiving DONT_HAVEs for all the blocks I'm asking for from the peers of the session? Remember that here we are only communicating with the nodes inside the session: the ones that answered the broadcast for the root CID successfully, plus the ones I may have found through the DHT or another content routing query. Imagine that after sending this request, peer B says it doesn't have any of these blocks. In that case, peer A removes peer B from its session and says, hey, I'm not going to ask this guy again, because he doesn't seem to have the rest of the blocks of the DAG I'm looking for. And this may well be the case: some peers may store only the top levels of the DAG structure and not the whole DAG you're looking for. So, as peer B seems not to have the blocks I'm looking for anymore, it's removed from the session. What happens if all the peers in a BitSwap session are removed? Well, we have to do another discovery, another broadcast stage, in which we query the providing subsystem again to populate the session with potential nodes storing the content.
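Before continuing, here is a compact sketch of the session logic described so far: broadcast want-have for the root CID, add the peers that answer HAVE to the session, ask the first responder for the block with want-block, and then split want-blocks for the next level across the session peers without overlap. The message-sending helpers are hypothetical; this only mirrors the flow explained above, not the real go-bitswap implementation.

```typescript
type Cid = string;
type PeerId = string;

// Hypothetical network helpers standing in for real BitSwap message sending.
interface Wire {
  sendWantHave(peer: PeerId, cid: Cid): Promise<"HAVE" | "DONT_HAVE">;
  sendWantBlock(peer: PeerId, cid: Cid): Promise<Uint8Array | "DONT_HAVE">;
}

class SessionSketch {
  private sessionPeers: PeerId[] = [];

  constructor(private wire: Wire, private connections: PeerId[]) {}

  // Discovery: broadcast want-have for the root CID to every connection.
  async discover(rootCid: Cid): Promise<Uint8Array | undefined> {
    const answers = await Promise.all(
      this.connections.map(async (p) => ({ p, a: await this.wire.sendWantHave(p, rootCid) }))
    );
    this.sessionPeers = answers.filter(({ a }) => a === "HAVE").map(({ p }) => p);
    if (this.sessionPeers.length === 0) return undefined; // fall back to DHT / re-broadcast
    // Ask the first responder for the actual root block.
    const block = await this.wire.sendWantBlock(this.sessionPeers[0], rootCid);
    return block === "DONT_HAVE" ? undefined : block;
  }

  // Next levels: spread non-overlapping want-blocks across the session peers.
  async fetchLevel(cids: Cid[]): Promise<Map<Cid, Uint8Array>> {
    const blocks = new Map<Cid, Uint8Array>();
    await Promise.all(
      cids.map(async (cid, i) => {
        const peer = this.sessionPeers[i % this.sessionPeers.length];
        const res = await this.wire.sendWantBlock(peer, cid);
        if (res !== "DONT_HAVE") blocks.set(cid, res);
      })
    );
    return blocks;
  }
}
```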
And we also broadcast again to all of our connections, just to check whether any of them has gathered those blocks in the time I was interacting only with the nodes in my session, or whether I have new connections that already have the content. In this broadcast, another thing to bear in mind: imagine that we got to this level of the DAG and, from this level on, none of the peers in the session has the rest of the blocks. Instead of doing the broadcast from the root again (we already have these first two levels), we start the broadcast from the blocks where we ran out of peers in the session. It's a way of populating the session again with candidates that potentially store the content, in order to restart the download of the rest of the blocks we still need for this specific content. And finally, what happens when peer A gets a block while it is communicating with a lot of nodes at the same time? We may have a lot of nodes in our session, so let's consider peer A interacting with peer B as one of the peers in the session. If at some point peer A receives the block from another peer that is not peer B, peer A sends a cancel message to all of the nodes in the session, to notify them that it is no longer looking for that CID and so that all of them remove CID1 from A's ledger. From then on, even if peer B receives the block for CID1, it won't forward it to peer A, because now it knows that peer A has found the block from another peer in the network. So this is basically how BitSwap works. We did an extensive evaluation comparing BitSwap against, for instance, the DHT. We ran tests on an IPFS network where, in order to find a block, you had to resort to a DHT query, and we compared that with BitSwap, where the seeder was among the connections of the leecher. In this test we had 20 nodes: a lot of leechers, 19 leechers, and just one seeder. In the DHT case, in order to find the block, you had to search through the DHT; in the BitSwap case, as all the nodes were connected to one another, the seeder is connected directly to the leecher, and BitSwap does its back and forth of want-haves and want-blocks to discover and exchange the content. And what we saw is that BitSwap is always faster than the DHT at finding content in the network, as long as any neighbor of the leecher has the content. Then we did another test to see how BitSwap and the DHT behave when the number of nodes in the network increases. This is not a hugely meaningful result, because we are talking about a dozen nodes and the real impact of using BitSwap compared to the DHT will be seen with many more nodes, but we see that the more nodes there are in the network, the slower the DHT lookup becomes. BitSwap may have a bit more overhead, with all of these broadcasts and so on, to find the node that stores the content, but once it is found, the exchange of the block is straightforward. Of course, there is something to bear in mind here.
BitSwap is really fast as long as one of the connections of the BitSwap node stores the block. If this is not the case, the DHT has a 100% probability of finding a node storing the content, as long as the content is still stored somewhere in the network. That's not the case with BitSwap: if we don't use any content routing system and the content is not held by any connection of the node, BitSwap won't find it. But that's why we use BitSwap as a complement to the DHT. It may be the case that, for the content we're trying to fetch, none of our current neighbors has the root CID block, and that is when we resort to the DHT or any other content routing subsystem to find the root CID. Once we find it, we add that peer to the session and establish a connection with it. From then on we interact directly with this peer, and there is no need to resort to the DHT anymore: there are no more DHT lookups, and we leverage the connection we already established while searching for the root CID to find the rest of the blocks in the network. That's why BitSwap is so interesting as a complement to other content routing subsystems such as the DHT. But this is the baseline operation of BitSwap, and one thing we realized while doing all of these experiments is that BitSwap has issues; it's not perfect. This realization is how the Beyond Bitswap research project started, because we realized that BitSwap is currently a one-size-fits-all implementation, and it may not suit every use case and every kind of data. We may have applications that want to be really fast in the time to first block, and applications that need to exchange a lot of data; there is a whole gamut of applications, and BitSwap has no way of being configured or tuned for each of them. We also realized that the current search, or discovery, of content that BitSwap does is blind and deterministic: it doesn't take into account what has happened before in the network. In the broadcast stage of the BitSwap protocol, it sends a want-have to all of its connections, and it doesn't care what happened before in other sessions, other requests for content, or other events in the network. It just broadcasts to everyone and tries to gather information about who has the content. And we started realizing that maybe we can do this search more smartly, leveraging the information that is out there in the network, in other protocols, and in BitSwap's own previous interactions with other nodes. We also realized that BitSwap requests are pretty plain: they are simple requests for a list of CIDs, a want list.
And instead of this, since we have these defined DAG structures, we could think about more complex requests: instead of asking for the blocks one by one and having to go back and forth to discover the links for the next levels of the DAG, maybe we can perform queries where, instead of saying "give me the block for this CID", we say "give me this full DAG structure" or "give me this branch of the DAG structure", a complex query where the list of blocks we're looking for is implicit in the query, instead of having to figure out the blocks by ourselves. And finally, of course, we could make BitSwap more efficient in its use of bandwidth. With these realizations, the Beyond Bitswap project started. This is ongoing work; in this repo you will find all the information, and I highly recommend going there and checking it out, because there are a lot of ideas and prototypes we are exploring, and we invite everyone to contribute. We also have the testbed where we run all the tests, and we invite everyone to join the quest. To give you a glimpse of what we have done so far, we have already prototyped three of the RFCs that have been discussed in that repo. We have explored the use of compression at the network interface, which we'll see in a moment; we have explored gathering information about what is happening in the network to search for content more efficiently; and we have added a new module, the relay manager, to increase the discovery range of BitSwap messages. We started with compression. We thought: HTTP already uses compression to download data from the web, and if HTTP does it, why aren't we using it to make more efficient use of bandwidth? We tried three strategies. First, the same way that in HTTP you can compress the body, we asked: what if we compress the blocks? What happened is that, in the end, we had some savings in bandwidth, but there was an overhead from having to compress every block included in our BitSwap messages one by one. Then we said, what if we use full-message compression, where every single BitSwap message is compressed? Again, we saw roughly the same behavior. But then we realized: what if we go down to the network layer (in the case of IPFS and BitSwap, the BitSwap implementation sits on top of libp2p) and implement stream compression at the protocol level? That's what we did: we explored compression at the protocol level for BitSwap and libp2p, and with a smaller overhead than the schemes above, we managed, for certain data sets, to get up to 70% bandwidth savings. So this was our first win. I'm adding here some URLs; on the Protocol Labs Research blog you will be able to find all of our contributions, and we've been documenting all the work we've been doing around the Beyond Bitswap project. So once we had compression going, we said: okay, we saw that we can leverage information from previous interactions in the network to make better discoveries of content using BitSwap.
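As a toy illustration of the stream-compression idea (not the actual libp2p compression transport the prototype uses), here is how a message stream could be wrapped with gzip on the sending side and unwrapped on the receiving side using Node's built-in zlib. The fake want-list message is just example data.

```typescript
import { createGzip, createGunzip } from "node:zlib";
import { pipeline } from "node:stream/promises";
import { Readable, Writable } from "node:stream";

// Compress whatever flows through `source` before it reaches `sink`.
async function sendCompressed(source: Readable, sink: Writable): Promise<void> {
  await pipeline(source, createGzip(), sink);
}

// Decompress on the receiving end.
async function receiveDecompressed(source: Readable, sink: Writable): Promise<void> {
  await pipeline(source, createGunzip(), sink);
}

// Example: round-trip a fake BitSwap message through gzip in memory.
async function demo(): Promise<void> {
  const compressed: Buffer[] = [];
  const collect = new Writable({
    write(chunk, _enc, cb) { compressed.push(chunk as Buffer); cb(); },
  });
  await sendCompressed(Readable.from([Buffer.from("want-list: cid1, cid2")]), collect);

  const out: Buffer[] = [];
  const collectOut = new Writable({
    write(chunk, _enc, cb) { out.push(chunk as Buffer); cb(); },
  });
  await receiveDecompressed(Readable.from(compressed), collectOut);
  console.log(Buffer.concat(out).toString()); // "want-list: cid1, cid2"
}
demo();
```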
So the next thing we implemented was want message inspection in BitSwap. The idea is that if a node is requesting content, it may well be storing it in the near future. So instead of having to broadcast to everyone when we try to find content, if any of my connections has requested that CID before, let's go and ask that one directly for the block. What we implemented is a want message inspection where BitSwap nodes inspect the requests from other nodes and, for each CID, keep a list of the top 10 nodes that have recently requested that CID, so that instead of broadcasting to everyone, I send a want-block directly to the node that requested that CID most recently. If we go back to the architecture, it's similar: the peer-block registry is similar to the ledger. In the ledger, whenever a peer has found a block, it sends a cancel and we remove that entry; here we are also tracking the requests from other nodes, but we keep the registry updated so that we know which peer has most recently requested each CID. So in the discovery phase, instead of sending a want-have to everyone, we just send a want-block to the top peer that recently requested the file. We ran some experiments with around 30 nodes, where we had just one seeder and a lot of leechers trying to find the content, with the leechers arriving in waves. In the baseline, of course, the more nodes that already had the content, the easier it was for new nodes to find it; but with the want-inspection prototype, even when a lot of nodes have the content and the time to fetch a block stabilizes, we shave one RTT off the time to request a block, because instead of having to send a want-have to everyone and then a want-block to get the block, we directly send a want-block to the peer that we know recently requested the content and potentially has it. What happens if that peer doesn't have the content? It doesn't matter: we lost one RTT and we start over with the traditional want-have/want-block discovery that baseline BitSwap uses. Another interesting consequence of this prototype is that we significantly reduced the number of messages exchanged between nodes, because if you already have an entry for that CID in the peer-block registry, you directly send a want-block to that peer instead of going through all these want-haves and want-blocks, even when you already know who has the content. So, another big win for BitSwap. And then we went one step further and said: okay, the problem is that if none of our neighbors has the block we're looking for, we have to resort to the content routing system.
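Here is a small sketch of the peer-block registry idea: record who recently asked for each CID, and consult that record before falling back to a broadcast. The class name, method names and shapes are made up for illustration; the real prototype lives in the Beyond Bitswap repo.

```typescript
type Cid = string;
type PeerId = string;

// Remember, per CID, the last few peers that asked for it (most recent first).
class PeerBlockRegistry {
  private recent = new Map<Cid, PeerId[]>();
  constructor(private maxPerCid = 10) {}

  // Called whenever a want message for `cid` arrives from `peer`.
  recordWant(cid: Cid, peer: PeerId): void {
    const list = (this.recent.get(cid) ?? []).filter((p) => p !== peer);
    list.unshift(peer);
    this.recent.set(cid, list.slice(0, this.maxPerCid));
  }

  // Best candidate to send a direct want-block to, if anyone asked for it before.
  bestCandidate(cid: Cid): PeerId | undefined {
    return this.recent.get(cid)?.[0];
  }
}

// Usage: try the registry first, only broadcast want-have if it has no hint.
const registry = new PeerBlockRegistry();
registry.recordWant("cid1", "peerB");
const candidate = registry.bestCandidate("cid1");
console.log(candidate ?? "no hint, fall back to broadcast"); // "peerB"
```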
But if we add a TTL to BitSwap messages, so that these messages can hop across more than one node, then even if only a neighbor of my neighbor has the content, I don't have to resort to the content routing subsystem: I can use my neighbor as a relay to find the content that sits at my neighbor's neighbor. Here's how this works. In the baseline implementation of BitSwap, if peer A sends a want message to peer B and peer B doesn't have the content, it says, hey, I don't have it, and then peer A has to find its own way to the content. In this case, instead, when peer A sends a want message to peer B, peer B starts a relay session and forwards these messages to its own neighbors, decrementing the TTL until it reaches zero. And according to whether they have the content or not, those neighbors answer back to peer A. So what we're doing here is that peer A ends up communicating with nodes two hops away as if they were its own neighbors, using the intermediate nodes as relays. With this we increase the discovery range of BitSwap without having to resort to an external content routing system. And the results were pretty pleasant. Here we had 30 nodes, with one single seeder, a lot of passive nodes (passive nodes just run the BitSwap protocol but do nothing else), and a lot of leechers trying to find the content. Seeders and leechers couldn't be connected to each other directly, so they either had to resort to a content routing system to find the content, because they had no direct connection, or use this jumping BitSwap to reach the content in the seeder. And what we saw is that the DHT, having to do these lookups to find the seeder, is slower than using the TTL and jumping through the passive node to find the seeder, using the passive node as the relay between the leecher and the seeder. And another interesting thing: we said, okay, we're sending a lot of want messages, we're exchanging a lot of information between nodes, we have a lot of requests flowing through the network. What if we combine jumping BitSwap, the use of TTLs in BitSwap messages, with the peer-block registry, the want inspection? Because as we get more information from nodes that are a few hops away from me, I can leverage that information to make a more direct search. And this is actually what we did, and it worked: the fact that we were gathering more information in the peer-block registry, thanks to all this flow of want messages through passive nodes and relayed and forwarded want messages, and the fact that, instead of sending all these want-haves and want-blocks, we know where the block is and can directly send a want-block and get the block back, meant a significant improvement in the time to fetch blocks. Of course, this always comes with a trade-off, and the trade-off is that we are using symmetric routing: to gather the block, we use the same path we used to discover it. Because what happens with peer A: if peer A goes to the DHT to find who stores the content and sees that peer C has it, it directly establishes a connection with C.
So from there on, like the communication is directly between A and C. For our jumping P-Trub, what happens is that we are using P-B as the relay and P-A may be connected to B and D and B and D may be connected with each other. So the thing is that in the end, there are a lot of messages flowing the network and they may be a lot of blocks, if more than one relay here finds the block, it may be a lot of blocks flowing into the network. And that's why we see this increase in the number of duplicate blocks flowing around in the network compared to the case in which we use the DHT just to discover the node that stores the content and directly communicates it. We are already thinking ways of improving this because actually we use, instead of using the relay session to perform the exchange of the content, we use asymmetric routing so that we just use the TTL to find and discover the node that stores the content and then the same way that we do with the DHT, we establish a connection directly with that guy, we would reduce the number of duplicate blocks here. What is the problem of duplicate blocks? In the end, it's an inefficient way, I mean an efficient use of bandwidth. But this is all that we have tried so far to improve file sharing in peer-to-peer networks, but this is an ongoing research and I invite everyone to join us in this quest. There are a lot of ROFCs with potential ideas of improvements, not only to improve file sharing in IPFS or in BitSAP, but to improve file sharing in peer-to-peer networks overall. So have a look at them and join our discussion in order to give us feedback about what is happening out there. There are already research and development teams building prototypes for the ROFC and coming up with new ROFCs that are being discussed in the repo. So in the end, if you like all of these topics, help us make file sharing in peer-to-peer networks placing files, going into this repo, joining the discussions and proposing new ideas and prototypes. Here you will also find the testbed and ways to replicate the results that I've shown throughout the talk. And that's all for me, please, if you have any questions or you have any feedback.
This is an overview of IPFS integrations across various platforms, devices and network transports - including browser integrations, video demos of IPFS apps on a native web3-based OS on Pixel 3, IPFS content loaded into various XR devices like Oculus Quest and HTC Vive Flow, and mobile-to-mobile IPFS apps via Bluetooth LE.
10.5446/57050 (DOI)
Hello everyone and welcome to my talk on the state of libp2p. Today we're going to go over the status quo and the future roadmap of the peer-to-peer networking library libp2p. But before we do so, I want to give a quick shout-out. I gave the same shout-out in my previous talk, though I think it's worth repeating: I'm very grateful for FOSDEM 2022 happening this year, and also very grateful to be given the chance to give a talk here. So thank you very much to all the folks organizing FOSDEM; that's really cool, great work. All right. I'm Max, a software developer at Protocol Labs. Within Protocol Labs I'm working on the libp2p project. I'm all over the place when it comes to libp2p, but mostly I'm maintaining the Rust implementation of the libp2p specification. In case you want to, you can reach out to me via email. You'll have a chance to ask a couple of questions after this talk, but for sure feel free to shoot me an email as well. You can find me on the various platforms via mxinden, and you can also go to my website to find out more about me. Cool, that's enough about me; let's talk about the actual topic, libp2p. First off, let's talk about what libp2p is in the first place, because I think that's quite important ground to cover first. Libp2p is a modular peer-to-peer networking stack, and the big slogan basically is: libp2p is all you need to build a peer-to-peer application. It has a shared core and then a bunch of composable building blocks; we'll go into a couple of them. Each of these building blocks, and the shared core, are specified in the specification and then implemented in many different languages. I estimate roughly seven-plus languages implement the libp2p specification. The benefit of all those different languages implementing libp2p is that libp2p can run in many different runtimes or environments, however you want to call them. For example, if you use js-libp2p, or rust-libp2p compiled to WebAssembly, you can run in the browser. If you use Go, the JVM implementation or rust-libp2p, you can run on, for example, a mobile phone or Android. You can run on embedded devices, you can target normal x86 on laptops, and so on, all kinds of different environments. Libp2p itself is not very useful on its own, though you can build useful things with it. That is really shown, first off, by IPFS, the InterPlanetary File System, which uses libp2p today; libp2p was actually born out of the IPFS project, where some great and brave souls decided to extract libp2p out of IPFS and thus enable other networks, for example Ethereum 2, to benefit from the peer-to-peer networking stack and use libp2p as well. So today, obviously, IPFS, Ethereum 2, the Filecoin network and the Polkadot network, but also all kinds of other projects, use and leverage libp2p as their networking layer. In terms of adoption, yes, all those projects adopt libp2p, but to put more numbers on it: it is kind of hard to measure how many nodes are online at any point in time, mostly because libp2p is a peer-to-peer networking library, but I would estimate somewhere around 100,000 libp2p-based nodes to be online at any given point in time. All right, libp2p: where does it live?
Here you see a beautiful graph of the OSI layers; even though it is a little bit dated, I think it helps understanding. Libp2p basically abstracts over L3 and L4 in a peer-to-peer fashion, giving you all the powers you would need as a peer-to-peer application, which would then import libp2p and thus leverage all the different mechanisms. I mentioned building blocks early on, and I want to go over a couple of them. I can't really go over all of them, there are many more that actually build up this cube, but I want to separate the ones we do cover into two types: on the left, the basic or most important building blocks you need in the peer-to-peer world, and on the right, a bunch of peer-to-peer protocols that build on the protocols on the left. All right, so let's cover the ones on the slide. First off, transports; that's quite an important one. Transports in libp2p enable you, as you would guess, to get bytes from one end to the other. Transports are really a core abstraction here: they allow you to establish connections in a dialing or listening fashion, and then send bytes over those connections. Libp2p supports different transports, for example TCP, QUIC and WebSockets, but also, in a more experimental fashion, you can exchange bytes with libp2p over WebRTC or Bluetooth. All right, once we establish a connection and are able to exchange bytes over that connection, we want to secure that exchange of bytes, and that is where secure channel protocols come into play. Libp2p makes use of them, one, to authenticate the remote peer, so you have that on the connection level, and two, to encrypt the data in transit. Libp2p supports different transport security protocols, most notably Noise and TLS: Noise and TLS on TCP, and TLS only on QUIC, that is, QUIC running on UDP with TLS. Okay, so once we establish a connection and secure that connection, we want to make great use of that connection, and we do so with multiplexing. It allows us to run multiple logical connections over a single connection, and you might be asking: why do we need that? Why don't we just open multiple TCP connections, isn't that a lot easier? That might have worked reasonably well for HTTP/1, but you also see HTTP/3, for example, moving to this model of multiplexing, and it is especially important in the peer-to-peer space, where establishing a connection is really not cheap. It might not be cheap because we have to do all kinds of different handshakes, but also we might have to hole-punch to that other node, which involves many round trips to different nodes in the network. So we want to make great use of that connection. We do multiplexing, and that basically enables applications to have an isolated logical connection on top of another connection shared with other applications. We can do flow control over those multiplexed streams and thus have back-pressure throughout the entire application. There are several implementations for multiplexing in libp2p, most notably yamux and mplex over TCP, and then if you use QUIC, for example, you have native multiplexing built into the transport protocol, which is super powerful. All right, so we know how to establish a connection, we know how to secure that connection, and we know how to multiplex on that connection.
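As a rough illustration of how these building blocks compose, here is a small, self-contained Go sketch. The interfaces and type names below are invented for the example and deliberately do not mirror the real go-libp2p API; they only show the layering of transport, secure channel and stream multiplexer.

```go
package main

import "fmt"

// Transport gets raw bytes from A to B (TCP, QUIC, WebSockets, ...).
type Transport interface{ Dial(addr string) string }

// SecureChannel authenticates the remote peer and encrypts the connection.
type SecureChannel interface{ Secure(conn string) string }

// Muxer runs multiple logical streams over one secured connection.
type Muxer interface{ OpenStream(conn, proto string) string }

// Host composes the three layers, like a (very) simplified libp2p host.
type Host struct {
	T Transport
	S SecureChannel
	M Muxer
}

// NewConn composes the layers: transport first, then the security handshake.
func (h Host) NewConn(addr string) string {
	return h.S.Secure(h.T.Dial(addr))
}

// Trivial stand-ins so the example runs.
type tcp struct{}

func (tcp) Dial(addr string) string { return "tcp:" + addr }

type noise struct{}

func (noise) Secure(c string) string { return "noise(" + c + ")" }

type yamux struct{}

func (yamux) OpenStream(c, proto string) string { return proto + " over " + c }

func main() {
	h := Host{T: tcp{}, S: noise{}, M: yamux{}}
	conn := h.NewConn("/ip4/1.2.3.4/tcp/4001")
	// Two protocols share one underlying, secured connection via the muxer,
	// instead of paying the handshake (and hole-punching) cost twice.
	fmt.Println(h.M.OpenStream(conn, "/ipfs/ping/1.0.0"))
	fmt.Println(h.M.OpenStream(conn, "/ipfs/kad/1.0.0"))
}
```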
But sometimes we can't establish a connection in the first place, and that is really due to NATs and firewalls in the current state of the internet. So what libp2p needs as well is NAT traversal. As a small motivation for NAT traversal: we did a crawl through the IPFS DHT, in I think 2019, and around 63% of all the nodes that we crawled during the measurement were undialable. That is quite a restriction in terms of functionality for a peer-to-peer network like IPFS, and it would be really nice to get those 63% down. So what we want to achieve with NAT traversal is greater connectivity, but we also want to do that NAT traversal in a way that doesn't depend on any central infrastructure. So we built Project Flare, which is just the project name for NAT hole punching in libp2p. It was added in 2021, it is fully specced, it is implemented in Go, Rust is catching up a little bit, and other implementations still have to do most of the implementation work. Project Flare runs on different transport protocols, namely TCP and QUIC. For those familiar with ICE, STUN and TURN, it is very similar to those, with the caveat that it doesn't depend on any central infrastructure. For example, we have a circuit relay v2 protocol, which is very similar to TURN; we have signaling protocols; and we have, for example, AutoNAT, which is very similar to the STUN protocol. As I said, all of this runs on TCP and QUIC today, and in the future we want to also support this entire stack with WebRTC, as that gives us great benefits in the browser world. All right, cool. We have now pretty much covered the left side, which enables all of the right side, and I want to cover a couple of high-level peer-to-peer protocols, even though these are definitely not all of them. First off, discovery. Within your network you likely want peers to discover other peers, ideally at random. You can do that within your LAN, for example, with mDNS, multicast DNS within a single LAN. There is a rendezvous implementation, which is similar to a bootstrap node: you go to that node and it tells you about a bunch of other peers within the network. And we also have random peer discovery built into other protocols, for example GossipSub, which enables you as a GossipSub user to exchange peers with the peers around you. All right. Once you have discovered peers, you might also want to route to specific peers or to specific content within the network, and libp2p offers a DHT implementation based on the Kademlia paper, definitely worth a read. Kademlia in libp2p enables you to find nodes, to store data on the DHT, and also to find providers of certain data. Besides routing, libp2p also enables you to do messaging. There are different messaging primitives, but the one to highlight here is GossipSub, as it is the most well researched; again, the paper is worth a read. GossipSub enables you to do gossip-style message dissemination, and it does so with a cool trick: it eagerly pushes the data to the peers it knows are interested in that data, and then lets other nodes lazily pull that data if they want to. That enables you to achieve all kinds of different scalability properties, which are quite relevant in a peer-to-peer network.
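A toy version of that eager-push, lazy-pull idea, again as a self-contained Go sketch with invented names rather than the real GossipSub implementation:

```go
package main

import "fmt"

// Full messages are pushed eagerly to "mesh" peers, while other peers only
// receive the message ID as gossip and may lazily pull the full message.

type Message struct {
	ID   string
	Data string
}

type Node struct {
	Name     string
	Mesh     []*Node // peers we push full messages to
	Gossip   []*Node // peers that only get IHAVE(msg ID) gossip
	Seen     map[string]Message
	WantPull bool // whether this node pulls messages it hears about
}

func (n *Node) Publish(m Message) {
	n.Seen[m.ID] = m
	for _, p := range n.Mesh {
		fmt.Printf("%s -> %s : PUSH %s\n", n.Name, p.Name, m.ID)
		p.Seen[m.ID] = m
	}
	for _, p := range n.Gossip {
		fmt.Printf("%s -> %s : IHAVE %s\n", n.Name, p.Name, m.ID)
		if p.WantPull {
			fmt.Printf("%s -> %s : IWANT %s\n", p.Name, n.Name, m.ID)
			p.Seen[m.ID] = m // lazily pulled
		}
	}
}

func main() {
	a := &Node{Name: "A", Seen: map[string]Message{}}
	b := &Node{Name: "B", Seen: map[string]Message{}}
	c := &Node{Name: "C", Seen: map[string]Message{}, WantPull: true}
	a.Mesh = []*Node{b}
	a.Gossip = []*Node{c}
	a.Publish(Message{ID: "m1", Data: "hello"})
}
```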
All right, so once you have messaging, another building block that I want to highlight, again not the only one, is data exchange, and libp2p offers you a protocol called BitSwap. BitSwap, I think the name speaks for itself, is a data exchange protocol which is quite smart about whom to ask for data and in what fashion to ask for it. That is a bit too much for an introductory talk; again I would recommend reading the paper, and there is a lot more documentation, which I'll point to later on. Okay, cool. Just to summarize: on the left we have the basic protocols, for example transports to get bytes from A to B, secure channels to secure those bytes, multiplexers to make great use of those connections, and NAT traversal to establish connections in difficult scenarios. Then we have different protocols building on top of those basic protocols: discovery, for example mDNS; routing with Kademlia; messaging with GossipSub; and data exchange with BitSwap. All right, cool. That's enough in terms of what libp2p is. All of that is specified in the libp2p specification and then implemented by very many implementations, and I want to highlight a couple. First off, quite obviously, the Go implementation, go-libp2p, which has been the driver of the libp2p space in the sense that most protocols are first implemented in go-libp2p and then later trickle down into the other implementations. Then rust-libp2p, which is pretty much on par with go-libp2p, only lagging behind in a couple of protocols. And then js-libp2p, which targets both the browser world and the Node.js world. Those are not the only implementations; there are many more, many of them community driven. For example, there is a C++ implementation, there is a Java or JVM implementation, useful for example if you ever want to target Android, there is a Nim libp2p implementation, there is a Python implementation, and, if I'm not mistaken, the newest kid on the block is Erlang with an Erlang libp2p. So we have many implementations. Where are those implementations used today? Let's go over a couple of projects. I think it is pretty obvious that libp2p is used in IPFS; we raised this multiple times by now. Within three hours you see around 55,000 unique IPFS nodes, though those numbers change a lot; again, it is quite hard in a peer-to-peer network to estimate the number of nodes. We have quite a lot of nodes staying up very long, for example around 18,000 nodes for more than three hours. So it is quite a large network, and probably the largest one using libp2p. IPFS uses pretty much all the protocols that libp2p has to offer, most notably the Kademlia implementation, as that is one of the core routing protocols within IPFS, but also the basics like ping and identify and so on. All the data that I'm showing here on the next couple of slides you can explore yourself via the Kademlia exporter at kademlia-exporter.max-inden.de, a small crawler that explores all those networks; in case you want to play around with that data, feel free. All right, the next project that is also using libp2p is Ethereum 2. It is hard to get numbers, but somewhere between 5,000 and 6,000 nodes are actively online, even though I don't know how NodeWatch (the data is from nodewatch.io) measures that, for example whether it includes non-routable peers. Ethereum makes heavy use of the libp2p stack.
Ethereum, for example, uses GossipSub and was quite involved in the research on GossipSub back when it was developed. Ethereum also builds on top of libp2p with its own protocols, for example discv5, which is a discovery protocol. So it is very cool to have Ethereum 2 use and be based on top of libp2p. Another network is Filecoin, not as large as the IPFS network in terms of number of nodes, but again quite a lot of them, with somewhere around 5,000 nodes seen within three hours. Filecoin, like Ethereum 2, makes heavy use of GossipSub and of the other protocols. It is especially interesting as it transports a lot of bytes, obviously, being the Filecoin network, and is thus a heavy user in the sense that we need to optimize libp2p quite a lot for Filecoin to be able to shuffle those bytes around, which is quite cool. The last large network I want to mention here, and actually the driver of the rust-libp2p implementation back then, is Polkadot, which is fully implemented in Rust. It has, again hard to measure, around 2,000 nodes up. Polkadot makes use of all of the basic libp2p protocols, makes use of Kademlia within libp2p, and also implements its own protocols on top of libp2p, which is cool to see. Another project which I'm quite happy to have within the libp2p space is Berty. Berty is a messaging app, and the cool part about Berty is that it is a messaging app built on top of libp2p enabling offline-first messaging. I usually explain libp2p with the example of a chat application, so it is really cool to see a professional chat application built on top of libp2p. Berty can make use of many of the libp2p protocols, but also has its custom Bluetooth Low Energy transport built into libp2p, which enables it to really do off-the-internet communication. All right, so we learned what libp2p is, we learned about the many implementations, and we learned about the projects that use libp2p. As a last section, let's dive into where libp2p is heading. This is all defined, and collaborated on, in the roadmap within the specification repository; you'll find that at the bottom. Here are a couple of things I picked out from that roadmap, even though there is a lot more in it. First off, libp2p being a peer-to-peer networking library, NAT hole punching is, in my eyes, very important. We made a lot of progress here: it is fully implemented in Go, released, and used in IPFS, but there are still a couple of implementations lagging behind, and we can still improve a lot on NAT hole punching within libp2p, so that is definitely still on our roadmap. Another item is efficient connection handshakes. We have a new specification for a protocol negotiation protocol which we want to bring to libp2p. Long story short, it enables us, for example, to make use of cool tricks around TLS where the server can send data first in the handshake, and so on. This is especially important in peer-to-peer, as connection establishment latency really matters when you don't just have one connection to a single server, but many connections to many different peers in a peer-to-peer network. And the last thing on our roadmap that I want to highlight here is WebRTC. WebRTC gives us a huge boost in the browser space, namely that WebRTC is the way for us to do hole punching and, in general, peer-to-peer connectivity in the browser, really the only protocol that enables that.
So, for example, WebRTC has built-in hole punching, but WebRTC also allows us to connect to endpoints which are not secured over TLS. Again, read up on the roadmap; I think it is a cool document, and there are a lot of north-star goals in there as well. All right, that concludes my overview of libp2p. Again, I'm very thankful to all of you for joining, and very grateful for having been given the opportunity to give a talk here at FOSDEM. If you want to learn more, head over to the documentation. If you want to connect with the community, join the forum. And if you want to learn the nitty-gritty details of libp2p, I think it is worth checking out the specification and obviously also the many implementations. Lastly, feel free to reach out to me directly if you have any additional questions. All right, thank you very much. On licensing: the implementations use MIT, though I might be missing one of the projects. Each of the projects within the libp2p umbrella has its own GitHub repository, so you'll find the license file there. The community-driven projects can really choose their own license, so I'm not familiar with all of them, but you will find it in each repository. All right, is there a C implementation? I think there are two. I'm not deeply familiar with them, so I can't really comment on how much of the libp2p specification they have implemented. I would suggest just searching with your favourite search engine for a libp2p C implementation. There is also a C++ implementation if that is easier to FFI to, or obviously the Rust implementation if you want to FFI to that. Okay, is BitSwap sort of the BitTorrent protocol over libp2p? Yes, very similar. You have things like the tit-for-tat algorithm and so on, but with BitSwap the idea is that you use DAGs to sync the data, and thus sync it across many nodes and get a speed-up in certain environments. How about vulnerabilities in peer-to-peer as a protocol concept? I'm not 100% sure I understand the question. In general, obviously, we are not building libp2p for malicious users, that's for sure, nor for bad nodes, and we have a couple of measures in there to prevent certain things. But I'm not sure it makes much sense to talk about this at such a broad level as libp2p, the networking library, rather than in the application-specific parts: for example, how does that apply to IPFS, or to some chat application, and so on. There are a lot of measures built into libp2p that make protocol evolution easy. If you control your whole stack and all your deployments, rolling out new protocols is fairly easy: you roll out new versions on the server and propagate them to the clients. But in the peer-to-peer world that is quite difficult, so rolling out new protocols is hard, and libp2p has a bunch of measures built in, for example protocol negotiation, and it uses Protobuf for protocol evolution and so on. A lot of things like that are built in, to hopefully also ship security patches more quickly. Okay, let me see. How is libp2p different from the Bitcoin peer-to-peer network or other cryptocurrencies not mentioned in the presentation? Yeah, that's hard, as the space is huge. Libp2p is generic. I would say that the Bitcoin peer-to-peer network, and the peer-to-peer networking of Ethereum 1, are specific to their use case, and the same goes for Bitcoin's peer-to-peer protocol.
Libp2p, by contrast, is really built on this idea of a shared core: you have to buy into the shared core, but none of the building blocks are mandatory; for example, you don't have to use Kademlia if you don't want to. And that also enables interoperability between the many different implementations and networks, so, for example, Ethereum 2 nodes can talk to IPFS nodes, which is really cool. Let me see, I can't find anything else. Cool. "Thank you for building such a great library." I'm very happy to hear this, very, very happy. "Quick presentation, thank you very much." Regarding the question about the extension proposal: I haven't looked into it, sorry. "The libp2p stack is a bit difficult to approach; are there any plans to improve the docs?" I hear you. I find it quite hard myself, and I'm working on it full time. We don't have enough people working on libp2p, especially across the many implementations. This is a community-driven project, so we would very much appreciate more input and more help from the community. Obviously that's always an easy answer to give, but at the same time I only have, let's say, 40 hours a week, so it is kind of tough to have everything perfect out there. Yes, the use cases, how to compose all the protocols together: I think we should have more examples around that to help you. If you want to gossip something, use GossipSub; if you want to transfer large files, please don't use GossipSub. "Yeah, I've seen the examples, I'm very excited for those." Thank you very much, that's very cool to hear. Okay, cool. I think we still have two minutes, and we don't have to fill them, but I'm happy to answer any other questions. Otherwise, as I mentioned in the talk, you can join the talk-specific chat room, which I think will be, or has already been, posted, and there is also the libp2p Matrix chat; you can find us on GitHub and in the forums and so on. And hopefully you will find the recordings of these talks, which is really cool on FOSDEM's side. Yeah, for sure, more examples and more blog posts are always very much appreciated. I also have a blog post on hole punching coming up on the blog, which will go into how hole punching works in libp2p, with examples in the various implementations so you can actually play with it yourself, or use it in IPFS for example. All right, I'll leave it at that. Thank you everyone for joining, and thanks to the organizers, the FOSDEM organizers; this is really cool. Yeah.
libp2p is a universal, cross-platform, multi-language, modular peer-to-peer networking library powering multiple large-scale networks, for example IPFS, Ethereum 2, Filecoin or Polkadot. We will discuss the current state of the project, eyeball the various language implementations, take a look at the many live networks running on top of libp2p today and finally cover the project roadmap for the years to come.
10.5446/56871 (DOI)
Hello and welcome to our presentation about canvas rendering. I'm Gökay Şatır, and I will be presenting why we are using a canvas object, how we are using it, and what results we get at the end. Collabora Online, as you know, uses images sent from the core side for rendering documents. Users need to be able to view documents comfortably, but technology is rapidly improving and has created high-resolution devices with tiny pixels. As we aim to have a consistent look across as many devices as we can, we needed a new solution for having crisp images on all targeted devices. We chose to use a canvas object. One of the problems we saw was blurry images; another one was gaps between tiles. The HTML canvas object helped us overcome these issues in a proper and maintainable way. We will first talk about how we render the tiles and the UI, then we will have a look at our implementation, and as the last step of our presentation we will look at the great results we have. Let's move to the second slide and look at the benefits in detail. Collabora Online is now using an HTML canvas element for rendering the tiles, and along with the consistent look there are other benefits that come with this choice. Let me talk about these items. It is easier to maintain and improve: we now have our own class, CanvasSectionContainer. Thanks to this new structure we have control over events and the UI; it is designed to work in isolation and is flexible enough to be improved. In the end, we reduced the need for the third-party libraries that we were using for event handling and UI. One of the challenges of using a canvas for UI and event handling was with testing. We use the Cypress test library for testing the behaviour of our code. Unlike unit testing, the Cypress library works like a real user and checks the state of HTML elements to see whether the software is working or not. Since our canvas drawings are inside the canvas, and thus are not HTML elements, it was a challenge to test the behaviour of our drawings. We solved this by creating corresponding HTML elements for our drawings; these HTML elements are created only while testing. Now we have a testable and ready-to-use canvas library. Our main target was rendering the tiles onto a canvas object. This is done by using more tiles for devices with tiny pixels, so users can benefit from the high resolution of their devices, and by rendering the UI. After having a class for canvas rendering, we could easily use this class for some parts of our UI as well. Our gain is fast reaction time and fast implementation. Now, having some things drawn on a canvas element does not by itself make them work for the user: we needed event handling. We certainly needed to know where the mouse pointer is, whether the user is holding a button, and the like. To solve this, we added event handling capabilities to our new class structure. And here is the first look of our new class. That was the beginning: these images are from the days when this class was just a proposal, and the first implementation looked like the image you see. This picture shows the main goals of the new class: rendering the tiles on a canvas, having a tool for UI elements like the row headers in Calc, and event handling. As you see in the picture, all the UI elements are in their places, and if you resize the window, with the library we created they will be moved to the edges so they keep their relative positions all the time. So this is the tiny UI library that we are using now.
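Roughly, the section idea can be sketched as follows, in Go purely for illustration (the real Collabora Online code is TypeScript/JavaScript), with invented names and numbers. Edge-anchored sections are repositioned on resize, and a click is routed to the topmost section under the pointer.

```go
package main

import "fmt"

// Section is one drawn layer owning a rectangle on the shared canvas.
type Section struct {
	Name         string
	X, Y, W, H   int
	AnchorRight  bool // keep the section glued to the right edge on resize
	AnchorBottom bool // keep the section glued to the bottom edge on resize
	ZIndex       int
}

func (s *Section) Contains(x, y int) bool {
	return x >= s.X && x < s.X+s.W && y >= s.Y && y < s.Y+s.H
}

// SectionContainer owns the canvas size and all sections drawn onto it.
type SectionContainer struct {
	W, H     int
	Sections []*Section
}

// Resize moves edge-anchored sections so they keep their relative positions.
func (c *SectionContainer) Resize(w, h int) {
	for _, s := range c.Sections {
		if s.AnchorRight {
			s.X = w - s.W
		}
		if s.AnchorBottom {
			s.Y = h - s.H
		}
	}
	c.W, c.H = w, h
}

// OnClick propagates the event to the topmost section under the pointer.
func (c *SectionContainer) OnClick(x, y int) {
	var hit *Section
	for _, s := range c.Sections {
		if s.Contains(x, y) && (hit == nil || s.ZIndex > hit.ZIndex) {
			hit = s
		}
	}
	if hit != nil {
		fmt.Printf("click (%d,%d) handled by %s\n", x, y, hit.Name)
	}
}

func main() {
	c := &SectionContainer{W: 800, H: 600}
	c.Sections = []*Section{
		{Name: "tiles", W: 800, H: 600, ZIndex: 0},
		{Name: "vertical scrollbar", X: 790, W: 10, H: 600, AnchorRight: true, ZIndex: 5},
	}
	c.Resize(1024, 768)  // the scrollbar stays glued to the right edge
	c.OnClick(1020, 100) // overlapping sections: the highest z-index wins
}
```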
Also, the event handling is what makes this UI library work for the users. Features and challenges: of course there were some challenges we needed to overcome while developing our library, for example event propagation. It is easy to handle events if you have only one layer drawn onto a canvas, but if there are overlapping elements and they need to handle events separately, it can be challenging. Let me explain an example situation. We have tiles as one layer and shapes as another layer; a shape certainly overlaps with a part of the document. We also have scroll bars. At least two of these three objects overlap, and the CanvasSectionContainer needs to decide which section will handle the events in that case: all of them, many of them, or only a sub-selection of them. We turned these challenges into features, and now we are using the new class with many layers. For event propagation, this picture shows a tiny code piece from our implementation. Event handling always has some difficult sides, for example when the mouse pointer moves outside of a layer while dragging, or when it moves outside of the canvas element or even the window. In these cases some checks are required. After having these kinds of checks for mouse events, we needed to look at touch events. The library we had before checked whether the device is touch-enabled or not, and that check sometimes makes it difficult to use devices where both touch and mouse input are enabled. We unified the touch and click events in our library to solve this issue, so we can now use our core library for handling touch events and mouse events at the same time. This is an image from our implementation. Crisp images: this was the most challenging part. I spent quite a good amount of time implementing this, and thanks to Michael and Kendy for their mentoring. Making the images look crisp on every device that Collabora Online works on was the main target of this library in the first place. Because of the window.devicePixelRatio property of the browser, a relatively new property, crisp images were difficult to render. This value is sometimes equal to something like 0.9 or so. We have tiles with dimensions of 256 by 256 pixels; when we draw these tiles on a device with a device pixel ratio of 2, they are drawn half-sized. As the solution, we kept the height and width of our tiles and increased the number of tiles when the device pixel ratio is above 1. We also solved problems with floating-point numbers. In the end, the resolution of the document became good once again. This image is from our Calc UI, and as you can see we now have many sections drawn on the canvas element: the cell cursor, the scroll bars, and the row and column headers are all objects drawn onto a canvas element, they handle their own events, and users can see the tiles crisp again. Thank you for listening.
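As a closing aside, the tile-count scaling described above amounts to a small calculation. The sketch below is an illustration only, not the actual Collabora Online code; the real tile size, rounding and floating-point handling may differ.

```go
package main

import (
	"fmt"
	"math"
)

// tilesNeeded keeps the tile pixel size fixed and instead increases the
// number of tiles when devicePixelRatio is above 1, so tiles stay crisp.
func tilesNeeded(viewCSSWidth, viewCSSHeight, tilePx int, dpr float64) (cols, rows int) {
	devW := float64(viewCSSWidth) * dpr  // viewport width in device pixels
	devH := float64(viewCSSHeight) * dpr // viewport height in device pixels
	cols = int(math.Ceil(devW / float64(tilePx)))
	rows = int(math.Ceil(devH / float64(tilePx)))
	return cols, rows
}

func main() {
	const tilePx = 256 // fixed tile dimensions in device pixels
	for _, dpr := range []float64{1.0, 1.5, 2.0} {
		c, r := tilesNeeded(1280, 720, tilePx, dpr)
		fmt.Printf("dpr=%.1f -> %d x %d tiles\n", dpr, c, r)
	}
}
```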
I will present why we needed to use canvas for rendering the UI and the document. Then I will explain the structure we created for this task.
10.5446/56872 (DOI)
Hello everyone, my name is Pranam Lashkari and I am a software engineer at Collabora. Today I am going to demonstrate how to set up Collabora Online on Kubernetes, talk about some of the things we need to take care of while deploying it, and mention some of the challenges I faced while configuring this. So let's get started. First thing: how it works. If we try to deploy Collabora like any other normal service, without using Ingress or any such complicated things, the entire cluster would look something like this. This would work completely fine in normal scenarios with just one user, but when we try to use the collaborative editing feature, this setup fails. Why would it fail? There is a chance that two users are trying to access the same document, and the request of the first user ends up on the first pod while the request of the second user ends up on the second pod. As you can see, there is no communication between these pods, so both users will be completely unaware that another user is trying to edit the same document, and the whole collaborative feature will fail. Both users will be able to edit the document, but the changes the other user has made will not be reflected; they would have to close and reopen the document, which is very inconvenient and defeats the purpose of the collaborative editing feature which Collabora has. As I said, that setup would fail, so how do we configure it so it works? When using Collabora Online on Kubernetes, Ingress becomes compulsory. In production everyone uses Ingress anyway, but Collabora Online mandates it, and there is no way this is going to work without load balancing. So how do we load balance? We use the WOPISrc parameter, which is a URL parameter: requests with the same WOPISrc parameter end up on the same pod, and the same document always has the same WOPISrc parameter. So it doesn't matter how many users are trying to access the same document, their requests will always end up on the same pod. We have provided configurations for the HAProxy and NGINX ingress controllers, so you can use those directly without any hassle, but you can use any ingress controller of your choice; you will just have to configure it yourself. After adding the ingress controller, the system looks something like this: the ingress controller decides where each request goes, so no matter how many users are trying to access a document, requests with the same WOPISrc parameter always land on the same pod. But when we load balance in this way there is a slight problem: one document may have so many users that it causes more load on one pod while the other pods sit idle. There is no solution for that at this point, so we have to live with it as it is. As I said, I have provided the configuration, and it can be found at this link. In today's demo I am going to follow the same steps provided on this link. So let's get started. I am going to use Minikube for today's demonstration, so the first thing I have to do is start Minikube. Once Minikube, or any cluster of your choice, is started, we have to install the ingress controller. Minikube by default uses NGINX, so this command will enable the NGINX ingress controller.
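The essence of the routing rule can be sketched as follows. This is an illustration only: the shipped configuration implements this idea inside the HAProxy or NGINX ingress rules rather than in application code, and the URLs and pod names below are made up.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"net/url"
)

// pickPod routes every request for the same document (same WOPISrc value)
// to the same backend pod, which is what keeps collaborative editing working.
func pickPod(rawURL string, pods []string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		return pods[0]
	}
	wopiSrc := u.Query().Get("WOPISrc")
	h := fnv.New32a()
	h.Write([]byte(wopiSrc))
	return pods[int(h.Sum32())%len(pods)]
}

func main() {
	pods := []string{"collabora-0", "collabora-1", "collabora-2"}
	// Two users opening the same document end up on the same pod.
	r1 := "https://office.example.com/?WOPISrc=https%3A%2F%2Fcloud%2Fwopi%2Ffiles%2F42"
	r2 := "https://office.example.com/?WOPISrc=https%3A%2F%2Fcloud%2Fwopi%2Ffiles%2F42"
	fmt.Println(pickPod(r1, pods), pickPod(r2, pods))
}
```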
Once the ingress controller is enabled: the configuration I have provided uses the namespace collabora and puts all its components into that namespace, but this is an optional step; you can choose not to use the collabora namespace, pick any namespace you want, and configure it accordingly. The configuration I have provided uses Helm. I have already installed Helm on my system, so I am not going to show that step, but will install the Helm chart directly on Kubernetes, and this command will assist us with that. With that, we are basically done configuring Collabora Online on Kubernetes. That's it, it was as simple as it could get. Let's check how it is going. As we can see, all the pods are up and running; let me just quickly close this. Next, we have to set up the host and IP for our Minikube. We can check the IP with minikube ip. I think I have already configured it, but let's check: yes, as we can see, I have already set up the host and IP. You can check your IP with minikube ip, and this is the host we have provided in the configuration; you have to change it according to your setup. Let's check if this link is working: as we can see, it returns OK, so it is working. Now let's check in Nextcloud whether we can actually edit documents. My Nextcloud is running on this IP address. Yes, as we can see, I can edit a document. Let's try to add another user here: I have already created one demo user and shared the same file with them, so we can check whether the collaborative editing feature is working correctly. As we can see, both users are editing the same document: if I type something here it appears there, and if I type something there it appears here. So the collaborative editing feature is working correctly, as we would expect. And that's it; this is how you install Collabora Online on Kubernetes. Thank you.
Demonstration of how to deploy Collabora Online using Kubernetes.
10.5446/56873 (DOI)
Hello and welcome to this talk about the curl-based HTTP and WebDAV UCP in LibreOffice. My name is Michael Stahl and I work for allotropia software. In this talk, we will first describe the problem that we are trying to solve, because it may not be obvious to everybody in the audience from the title; then we will discuss what we actually did to solve it, and we will conclude with the current status at the time the talk was recorded. First, let's start with the term UCP: what even is that? In LibreOffice there is a sort of virtual file system abstraction called the Universal Content Broker API, and it identifies all files, which are actually called contents in this API, via URLs. The API is generally very abstract: you basically have a main function to do anything, which takes a string describing the operation to be performed and a sequence of Anys. So yes, very abstract. And then there are these components called UCPs. The way it works is that the broker identifies which component can handle a particular URL via the scheme in the URL, so for every supported scheme there is a UCP component. For example, there is one that handles the file scheme for local file system access, and there is one to handle the web protocols, that is HTTP and Distributed Authoring and Versioning (WebDAV), which is essentially an extension of HTTP with additional methods. So there is a UCP for that. Due to its interesting and varied history, LibreOffice had some issues with this HTTP UCP, and the main issue is actually that there is not a single UCP but two UCPs in the Git repository. The older one is the neon-based one inherited from OpenOffice.org. This is very mature and robust code that has been maintained over the years, and the neon library it is based on implements not just the low-level HTTP protocol but also certain WebDAV aspects, such as parsing the various XML formats used in this protocol extension. The problem with it is that it is licensed in such a way that Apple does not allow distributing it via the App Store, and these days the App Store is becoming increasingly important for macOS users and is the only way to get anything distributed to iOS users. So that is a bit of a problem. The other issue is that it requires OpenSSL for encrypted transfers, which in turn requires us to bundle whatever trusted certificate authorities are built into OpenSSL; the user can't manage that, it is all basically hard-coded. The other option was the Apache Serf-based UCP. This one was imported from Apache OpenOffice at some point. It also has the problem that it requires OpenSSL, and the additional problem that it requires three different external libraries to be bundled. The main serf library is problematic to update because at some point they removed the build system that we were calling from the LibreOffice build system and replaced it with a completely different one that requires its own build tool, and if there is one thing we don't want in LibreOffice, it is yet another build tool added to the build requirements. On the plus side, the Apache Serf-based UCP was licensed in such a way that it was possible to distribute it in the Apple App Store.
Another issue with the Serf-based UCP was that, since pretty much everybody was building their LibreOffice with the neon-based UCP, the code was rather poorly maintained, and it actually broke rather badly some years ago. So in order to solve this situation, we had the idea that it would be best to implement another HTTP UCP based on the curl library. Why this library in particular? Because we already have to bundle libcurl: the UCP for the FTP protocol and the one for the CMIS protocol already require libcurl, and I believe the automatic update checker does too. One advantage it has is that it can use the TLS stack of the operating system on Windows and on macOS, and it can also use the NSS library. This means that on those operating systems the user can use the operating system's user interface to manage which CAs they trust or don't trust. Another advantage is that Apple allows libcurl to be shipped in the App Store. So this has the potential to make the other two UCPs obsolete, and therefore The Document Foundation tendered this project to implement the new curl-based UCP; we won that tender and implemented it last year. Now for a high-level overview of what this UCP looks like. Basically there is the upper layer of the UCP, which provides a UNO API that is called by LibreOffice; it translates the calls LibreOffice makes, from their generic, stringly-typed, sequence-of-Any abstractness, into function calls that correspond to HTTP or WebDAV methods. It also does a bit of high-level protocol handling, first to figure out which features are supported by the particular server it is talking to, because there are several different levels of WebDAV support: a server may or may not support locking, and of course a server could also not support WebDAV at all and just be a basic HTTP server. All of the code in this layer is independent of whichever low-level library is used at the bottom. Secondly, in the middle, there is the lower layer of the UCP, which translates the generic HTTP function calls into whatever the particular third-party library in use expects, and it hooks up various callbacks so that data can be transferred out of the lower library, and also for authentication. The way authentication works is that you try to access a URL on the server, the server tells you that you don't have permission, and then you need to ask the user for credentials and try again. This lower layer also contains parsers for the various XML formats that may be returned by the server as part of the WebDAV protocol. And then at the bottom of the stack there is whatever third-party HTTP library is being used; this can be bundled with LibreOffice, as it is in TDF builds, or, if you use LibreOffice from your Linux distro, it will typically come from the system. In a little bit more detail, the most important classes in the UCP are these. First, in the top left corner, the entry point of the whole component is the ContentProvider class, which is basically a factory that the broker calls to create the Content instances. The main class of the upper layer is the Content class, which implements the UNO API; this is the class that the rest of the LibreOffice code calls into to initiate transfers.
Then there is the DAVResourceAccess class, which sits in the middle between the Content and the low-level CurlSession class and uses the DAVSessionFactory to create instances of the CurlSession class. There is one instance of the Content class per URL, and depending on how the transfers go, it is sometimes necessary to create a new CurlSession. The CurlSession is the most important class of the lower level of this component and is responsible for interfacing with the curl library. Then there is the SerfLockStore class, a singleton running a thread; it keeps a map of all of the WebDAV locks that have been obtained from every server (it is a singleton, after all), and it tries to refresh those locks on the server at, or shortly before, the time when they would expire. The other class in the lower level is the WebDAVResponseParser, which parses the XML documents returned from certain WebDAV requests such as LOCK and PROPFIND. And then there is the DAVAuthListener_Impl class: when the CurlSession finds that the server requires authentication, it calls up into this class, which then calls out via a UNO API to request credentials from the user via a dialog, so the user can enter a username and password, or, on Windows, there should also be the possibility to use NTLM authentication via the operating system. Most of these classes started out as copies of the Serf UCP code, except for CurlSession and CurlUri, another rather small class, which are entirely new. So how did we implement the curl-based UCP? First we started by copying the upper layer of the Serf UCP; we chose that one for licensing reasons. Then, because this code really hadn't been maintained in a while, it turned out that there were hundreds of warnings from the Clang static analysis compiler plugins that we use in the LibreOffice build, so we had to fix all of those warnings first. Once the code was basically compiling, we wrote the low-level part that interfaces with curl, that is, the CurlSession class and so on. Once we had that, it was possible to fetch documents, but trying to store a document would still fail, again because of the unmaintained Serf code: there had been some changes in the framework part of LibreOffice that required changes to the UCPs, and those had only been made in the neon one.
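Before continuing with the porting story, the layering just described can be sketched roughly like this, in Go purely for illustration (the real code is C++), with invented method names standing in for the actual class interfaces.

```go
package main

import "fmt"

// DAVSession is what the upper, library-independent layer of the UCP talks
// to: one method per HTTP/WebDAV verb it needs (names invented for the sketch).
type DAVSession interface {
	Get(path string) (string, error)
	Put(path, body string) error
	PropFind(path string) (map[string]string, error)
	Lock(path string) error
}

// curlSession stands in for the lower layer that drives the HTTP library.
type curlSession struct{ server map[string]string }

func (s *curlSession) Get(p string) (string, error) { return s.server[p], nil }
func (s *curlSession) Put(p, body string) error     { s.server[p] = body; return nil }
func (s *curlSession) PropFind(p string) (map[string]string, error) {
	return map[string]string{"DAV:getcontentlength": fmt.Sprint(len(s.server[p]))}, nil
}
func (s *curlSession) Lock(p string) error { return nil }

// content mirrors the role of the Content class: one instance per URL,
// translating generic open/store requests into session calls.
type content struct {
	path    string
	session DAVSession
}

func (c *content) Store(doc string) error { return c.session.Put(c.path, doc) }
func (c *content) Open() (string, error)  { return c.session.Get(c.path) }

func main() {
	s := &curlSession{server: map[string]string{}}
	doc := &content{path: "/dav/report.odt", session: s}
	_ = doc.Store("hello webdav")
	text, _ := doc.Open()
	props, _ := s.PropFind("/dav/report.odt")
	fmt.Println(text, props)
}
```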
So then we had to go through all of the changes that had happened in the neon UCP since the time when these two code bases forked. We looked at all of them; there were surprisingly many, around 450 commits, and we cherry-picked the ones that looked relevant to the curl UCP. We had to be careful to check the licensing statements for each of the authors of these commits, and it turned out that the number one author of the curl UCP, by number of commits, is not me but Giuseppe Castagno, who did a lot of work some four or five years ago. In some cases the commits conflicted with other changes that had been made in the Serf UCP, and sometimes it was easier and quicker to just reimplement the change instead of resolving those conflicts. Other changes were trivial in nature and didn't look very important, so we skipped them, which means there may be some stylistic clean-ups that could still be done in the curl UCP code. We also omitted certain features of rather dubious value these days; for example, there was a way for the UCP to pop up a dialog asking the user to confirm whether they want to accept an invalid server certificate, and I don't think anybody wants that sort of feature. Once we were done with that, we started to test against some real WebDAV implementations, and at this point a new colleague, Gabor Kelemen, jumped in and did a lot of work. We started with Apache httpd with mod_dav; this found some bugs in our new code, and overall we found that this server behaves quite reasonably, so no complaints there. The next one we tried was Nextcloud, which also found a couple of bugs in our code, but the server had a very nasty surprise as well: if you PUT a file using chunked transfer encoding, the server reports that it was created successfully, but the file that was actually stored is empty. That was not ideal, and it is why we don't use chunked transfer encoding for PUT. Then we tried SharePoint, and this required a few more changes. It also had problems with chunked transfer encoding, and it was kind of funny that it replied that everything was OK while the body returned from the request was not the XML document we expected but an error message in HTML, not even the right MIME type. It also had some funny behaviours where it would reply to the same URL either with OK or with "that doesn't even exist", depending on which method you used. The WebDAV protocol also allows storing user-defined properties on each file; they can be given any name, identified by an XML namespace and a local name, and it calls them dead properties, I don't know why. During testing it turned out that the SharePoint server really doesn't support user-defined properties, and the way it fails is actually quite funny. So, to conclude, what is the current status? Basically the implementation is done, and the new curl-based UCP is shipping in LibreOffice 7.3. On the master branch we were already able to remove the two previous UCPs and all of the external libraries that were bundled just for them, which resulted in a net reduction of around 17,000 lines of code. A few regression bugs were found and, I hope, mostly fixed. It also turned out that one tester claimed the new UCP is somehow more performant than the neon UCP was, and to be honest I don't know
why that would be the case. One nasty surprise was that there are apparently servers out there that really don't like curl: we send a user agent in the headers of every request that includes the curl version number, and this particular server decided that if the user agent contains the word "curl", then you get a permission-denied error. So now we are at the end of the talk, and I want to thank The Document Foundation for sponsoring this project, which would not have been possible without that support, and also thank you in the audience for your attention. Okay, Dennis was asking why the dialog for accepting invalid certificates was not implemented. It's because, as far as I know, browsers don't pop up a dialog anymore when you try to connect to a server with an invalid certificate, so why should LibreOffice do it, basically.
LibreOffice uses Universal Content Providers to access files via various protocols. Due to accidents of history, LibreOffice contained 2 different UCPs for WebDav and HTTP, one based on neon and the other on Apache Serf, each with different bugs and bugfixes. For LibreOffice 7.3, thanks to a tender from The Document Foundation, we have replaced both of them with a new UCP based on libcurl, which is designed to meet all currently known requirements, and is able to use the operating system's TLS stack on Windows and macOS.
10.5446/56876 (DOI)
Hey everyone, welcome to this talk, which is about improving coverage analysis for LibreOffice in the continuous integration platform that we're running. This is a joint project done by three people: Linus, Swantha and myself. The project is funded by the Prototype Fund, a German open source funding programme run by the Open Knowledge Foundation and itself sponsored by the Federal Ministry for Education and Research, and we're very grateful for that support. Great, so what is this all about? For LibreOffice and our continuous integration platform, we'd like, first of all, to develop some glue code to integrate different data providers so that we can tap into this rather rich ecosystem of tools. Once we have done that, we will get lots of nice shiny new tools, and we will at least try to integrate some of them into the existing Jenkins instance, provided that LibreOffice is fine with that. Doing that would create incentives, or would at least enable the project to easily tweak things so that incentives can be created, for QA and developers to do the right thing. And last but not least, we would love to provide an automated means, especially for newcomers, to locate features in the LibreOffice code: the very frequent question of "I'd like to figure out why some RTF thing is not working" or "I'd like to add a feature to a filter, where do I find the code?". It would be great to have something that provides that answer without human interaction, because that is always a strain on mental bandwidth. Okay, so that's us. There is also a contact and project page for this; if you look at the PDF version of the slides on the FOSDEM schedule, you will also find our email addresses there. First of all, what is the very first problem, problem number zero? We have lots of nice tools for individual programming languages, and for something like LibreOffice we have a number of programming languages that we need tools for, for example coverage analysis. For C++ you have at least two toolchains for coverage analysis, and their back ends generate data whose common denominator is usually a set of static HTML report pages. Same story for Python, same story for Java. So you have n programming languages and coverage tools, and on the other side you have a number of CI systems like Jenkins, GitHub Actions, Travis CI, or whatever else you have there. If you can only use the intersection of your programming language, your coverage tool and your CI system, you sometimes end up pretty empty-handed. I'd like to solve that problem the computer science way, which is to provide integration APIs between those, say an API for coverage information, and then provide glue code or adapters for programming languages, analysis tools and CI systems. For C++ we have gcov for GCC, we have llvm-cov, we have lcov that parses that, we have gcovr, another tool that also parses that but is optionally able to generate XML output, and so on, and then you have the CI side. What seems to be the lingua franca is Cobertura XML; there are lots of plugins for it, so a first good approach is to convert everything to that, but we're also planning a rather abstract API for this, like the language server protocol, which solves exactly the same kind of problem.
So you have n editors and m programming languages, and you want syntax highlighting and source code analysis; the way to make this feasible is to have an interface layer, the language server, so you write your syntax highlighter once, and then all the editors using the language server protocol can suddenly highlight your language. That's the plan here as well. Okay, so let's look at what the Jenkins ecosystem, which is where we're most interested because LibreOffice runs Jenkins as its CI tool, already has to offer for coverage. There is JaCoCo, which does something quite similar to lcov but is nicely integrated into Jenkins, so you get a report for the build right there on the job page; the drawback is that JaCoCo is Java-only. Next up is Cobertura: that takes the XML file that is somehow produced by your build, which can be Java or C++, and again renders a nice static report, which is kind of okay, but for every tool and every analysis you would like to run, you would then need to write a Jenkins plugin, and then a GitHub Actions plugin, and so on. Much better is the Code Coverage API, another plugin, but one level up: it takes a number of input formats, eats those and produces reports, so it is tool-agnostic; it just takes rather generic coverage data in a number of formats and produces nice reports from it, which is much closer to what we need. That would be the model we would go for. The remaining drawback is that this is again only for Jenkins, so the last step is missing, but it gets us about two-thirds of the way. It eats Cobertura, it eats JaCoCo, it eats llvm-cov files, and it has some nice extension points if you want to extend it with your own format. More features: that API plugin also has some nice drill-down lists, so lists of files and their coverage, and, which I think is really nice, a treemap where the size represents lines of code and the colour represents the coverage, the percentage of lines covered. That gives a really nice overview of the code base, which you can drill down into, to see where a bit more test coverage would be welcome. Okay, a few more tools I came across. These are mostly Java, but there are sometimes C++ equivalents: SpotBugs and the like, or clang-tidy probably, or cppcheck on the C++ side. Those are always nice to have not as a separate tool but right there on the Jenkins job page, so you have a patch on Gerrit and you get a nice report on how this patch is looking, rather than one week later maybe getting a warning from the tinderbox, or having to go to some cppcheck page to see how the code looks. Checkstyle, or probably clang-format for us: we run clang-format as a pre-commit hook today, but possibly we could also surface hints in CI for things where there is no clear right or wrong but some preference from the project, and then you would still get something nice here. PMD is Java-only; possibly we could run something similar, depending on the computational cost. CPD, the copy-paste detector, sounds very useful to have, so that no new code gets introduced that copies a lot, which would be bad.
We could also run additional things, basically everything that people in the community are running and have some script for, like a spell checker on comments or other things. You could run this in CI and get the result right there before things get merged, so all that down-the-road fixing, we wouldn't have that anymore. Right, so creating incentives. We really should get more into the habit of automating what is possible for reviews. I think we started to do that and then we stopped, and there's still a lot of fixing up happening after the fact, after things got merged, and I think we can iteratively improve on that. Beyond this coverage thing that I started with, which is also really nice for creating incentives like increasing the coverage, or even finding the places that have weak coverage and then having easy hacks to match people to work on that, there's also a nice way to create incentives with nice metrics on the CI page: not to block things from getting in, especially when there's no clear hard yes-or-no, right-or-wrong answer, but to nudge developers by suggesting changes. If there's a nice report, nicely accessible right in front of your eyes, it's still a judgment call and you can still override it, but more often than not you would actually take the suggestion and act on it. And then beyond that, if there's a clear project preference, for example we had this for the gbuild conversion and for comment translation, where there were lots of German comments, the moment you get metrics you create incentives for people to move those metrics in the right direction. So that's also a nice way for the project leadership to create direction and to create action. And it's pretty easy to do that in the CI system: for coverage that's already there, and those metric graphs are just part of Jenkins, part of the Jenkins ecosystem, so it's not particularly hard to create that for any kind of metric. Then you can see before the patch and after the patch, and usually you want to get certain things down and other things up, and as long as the trajectory is the right one, you get full marks on that. And then you can encourage people by reporting on that, like twenty patches with full marks by developer A, and have some nice report wherever you need it. Bottom line: we should really try much harder to put things early in the pipeline, and move things that get fixed up after the fact earlier into the development workflow. If you already have scripts for something, I don't know, spell checking, coverage, cppcheck, other things, sanitizers and crash testing, try, at least if that is computationally feasible, to get this earlier into the contribution chain. And then obviously encourage more automation. Okay, that was that. We will try to get as much as possible done with the project funding that we have. There's one killer feature that I'd love to see happening; we will see how feasible that is. It's not a thing that will just fall from the sky, it means work, technical work, to make it happen, and it probably also needs some work to create dedicated tests for it. That's the feature map. So again, the underlying problem that we're trying to fix here is the question: where is feature XY? Where's the code that renders bevelled borders or the shadows, or where's the filter code that imports the bold attribute in ODF?
We already get most of the control flow for a feature, at least the static control flow, from the coverage. So you run one test, not all of them, just one, and you get a coverage result: that's the code that's been touched or triggered by that specific test. So you have a document that has something in it, you load it, you compute the coverage, and you get a result. Then you have a second document that changes a tiny little bit, say you just add an image; you do the same, you load it, you generate the coverage, and you have a second coverage result. Now if you subtract the two, you will find the code that is solely used by the diffed feature, the new thing that you just added. So if you have a very simple document and you make some text bold, then you load that and diff it against the non-bold document, you will probably find the code that is somehow involved with loading, rendering and displaying the bold text. That's pretty simplistic for something as complex as LibreOffice, and it's not clear how well it will work. Still, having tests that really test features in an orthogonal way, one by one, is something that we mostly lack. Right now the testing is of very, very complex things. Usually we have a bug, the bug gets fixed, and the bug fix gets tested, and it usually mixes many, many things in one document, so it's not very orthogonal. I would think more than 90% of our test documents largely trigger the same code, and then there are very subtle differences that we're actually testing with them. This rather orthogonal, unit-testing-like approach, where we try to trigger independent things in the code base, we have very little of that. So even if this only partly works, we would create a nice incentive to have more of that kind of test in the code base. So, yeah, what you get then, or what we hope we can provide, is at least some idea of a cognitive map of the code: an area of the code where we know what it does, and we can keep or regenerate that map more or less automatically by running tests. Documentation is nice, but it frequently gets stale, or it isn't there, or both. For something like LibreOffice, with 10 million lines of code, we still need so much context information, so much experience, so many tricks: what to grep for, where to look, what to try, where to put your breakpoint to figure something out. So getting something more automated, which gets us a tiny little bit closer to being able to ask a search question in natural language and be pointed at the code or at the file that does something, with some probability, would be really quite nice. Especially for the mentoring that we're doing, and I've been doing quite a bit of that, it's very often this question that you see five hours later on IRC, when you were not there and couldn't answer, and the person who was asking has moved on because there was no answer. It would just be tremendously lovely to have a chance of pointing people at such a search engine. But we will see whether that has any chance of working at all. In any case, we're at the end of the talk, and I'd love to discuss with you what you think about those ideas; take them with a grain of salt, and take them as suggestions or as an offer. We'd love to at least do this tools improvement and add more to the actual CI run.
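The subtraction idea just described can be sketched with two lcov tracefiles: collect coverage for the baseline document and for the document with the extra feature, then keep only the lines hit by the second run. The file names and the DA:-only parsing are assumptions for illustration, not the talk's actual tooling.

from collections import defaultdict

def covered_lines(info_path):
    """Return {source_file: set(line numbers with hits > 0)} for one lcov run."""
    hits, current = defaultdict(set), None
    with open(info_path) as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith("SF:"):
                current = line[3:]
            elif line.startswith("DA:") and current:
                lineno, count = line[3:].split(",")[:2]
                if int(count) > 0:
                    hits[current].add(int(lineno))
            elif line == "end_of_record":
                current = None
    return hits

def feature_only(baseline, feature):
    """Lines touched by the feature document but not by the baseline document."""
    diff = {}
    for path, lines in feature.items():
        extra = lines - baseline.get(path, set())
        if extra:
            diff[path] = sorted(extra)
    return diff

if __name__ == "__main__":
    base = covered_lines("plain_document.info")   # e.g. coverage from loading plain.odt
    bold = covered_lines("bold_document.info")    # e.g. coverage from loading bold.odt
    for path, lines in sorted(feature_only(base, bold).items()):
        print(f"{path}: {len(lines)} lines only hit by the 'bold' run")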
We'd love to do that and we'd love to prototype it, but just let us know if that's a good idea or not. How do you like that? Okay, thanks a lot, and I'm looking forward to your questions. Okay, so it's possible that we're live now with questions. The first one here was the suggestion that we could have those feature descriptions cross-application, which is possible if they are file-format based. Okay, so that would be limited to anything based on a file format, but that's definitely something that Svante has started and has been thinking about for many years. Maybe you'd like to comment on that a bit, Svante? Okay, so there's a link; I wonder if we can somehow add that, or maybe we can add it to the slide material which you find on the talk page. I can update that after the talk with the link to the OASIS TC, where there's the start of a test suite. Let me check. So, maybe while more questions are coming up: this cross-application thing is kind of work in progress, but we do have something where we started. Let me try screen sharing. So, for example cppcheck: there have been some long discussions, we had some daily job running there, but if you do that in Jenkins it's actually pretty easy. There are plugins for that, you can add it as part of the job, you get some nice overview here, you can drill down, it runs pretty fast, so you have this little widget here and it gets nicely populated. You can drill down here, you can, let's say, focus on files or on types or on issues, and it's all right in front of you. Same here: we did this because there was an easy setup for the ODF Toolkit, so we have a few more plugins enabled there. For example the coverage report that I was referring to in the slides, which is populated here, and this very nice treemap where you can drill down. It looks pretty red, but the upside is that this should then create incentives for people to increase coverage. But Svante, you wanted to say something? Yes, sorry, I had audio trouble with the two streams talking to each other here from the two rooms, but Michael figured it out, so sorry for that. I heard you twice, in canon, so please repeat what you asked earlier, because I couldn't understand a word while the audio signals were going back and forth. Sorry for that. So maybe you can expand a little bit on this cross-application test that was just referred to? Okay, that's a good point. With the ODF Toolkit, from the talk that I should have done yesterday, we have this, I would call it, high-level ODF API. People that don't know the ODF XML still know there's a table, there's a paragraph, there's an image in the document. Yes, there's a delay here; do you hear me? Okay, I'll just continue talking. We separate things by adding these operations, and we try to separate this in test files, let's put it this way, and by these operational changes we can add and subtract new features. And at the TC we try to do the same thing for new changes. Yeah, it's a bit shaky here, so I'll keep it short. Sorry for the confusion. Okay, so maybe back to some demo. What I was also referring to in the slides was this copy-paste detector; that's also almost free. It needs setting up, obviously, so it's not free in the sense that it has no computational cost, but it's a plugin. This one is again the ODF Toolkit here, but we can run it just the same on LibreOffice; it just needs a source checkout. It can be a separate job. It can even be...
So perhaps that's more acceptable for the project: it can be a separate Jenkins instance that would still integrate with Gerrit, so you would still get a comment there in Gerrit that would help reviewers evaluate things, and it would create the same incentives, but it wouldn't block. So when you're waiting for the plus one so that you can merge that last-minute feature, it wouldn't block that. On the other hand, yeah, CPD and cppcheck are pretty lightweight in terms of computational cost. And I think we're at the end of the session here; let me check if there are any last-minute questions. No. So thank you very much. We'd love to continue talking about the usefulness and how to apply this. Thank you all for watching. Bye, everyone. Enjoy FOSDEM.
Improved coverage analysis for LibreOffice's CI. Our journey towards deeper integration of coverage analysis tools into Jenkins CI - a PrototypeFund project.
10.5446/56879 (DOI)
Hello, everybody. Welcome to the yearly FOSDEM LibreOffice WebAssembly talk. My name is Jan-Marc Roski, and my co-speaker is... Hey, everyone, Thorsten Behrens my name. Welcome also from my side to the LibreOffice WebAssembly updates, the bi-annual one. Looking forward to telling you lots of new things, new exciting things about WebAssembly. Let's get going. So let's start with the challenges, just a little recap of the situation and the current position we are in. What we are targeting is a WebAssembly binary of LibreOffice which is usable in the browser and downloadable. Currently the optimized version is about 150 megabytes, Writer-only. You can also build the debug version, but that gets you to 200 megabytes, with a separate, roughly one-gigabyte WebAssembly file carrying the debug information. Then we additionally have a virtual file system image to get a lot of fonts available; currently we simply include all the fonts that LibreOffice offers, and that's another 100 megabytes. That could be split up, but it still has to be downloaded by the user. Another thing is that the WebAssembly thread pool size is currently fixed to 4 in Qt, and there's not really a way to enlarge that, so we switched off multi-threaded document loading, and the IPC thread is also not running currently, but that's less of a problem. Then there's still the 4-gigabyte heap or RAM limit you have because of 32-bit pointers. Nothing can be done currently about this, but LibreOffice is running with 1 gigabyte, so in theory there's plenty of space for your documents. And the general WebAssembly development experience is still fresh, so it can definitely be improved. So what's the current status? Basically everything is merged now in master. There is now also a static README markdown file where you can read up on how to do the setup. What you can do on master is build LibreOffice as a completely static image, so you get a large one-gigabyte soffice.bin, and you can obviously also build soffice.html and soffice.wasm, which is the LibreOffice Wasm version, and the same obviously works for the vcldemo. So that brings us to all the problems we have now fixed. The static linking is done, the core dependency loops are solved with a plugin interface and gbuild, static component generation directly from the build is done, so you do not have to maintain multiple dependency lists for components anymore, and you can build the virtual file system image directly out of the make process, which takes simple file lists for the files you want to add. Emscripten link time is also down to something like 30 seconds, at least for the debug build. The optimized build, which you will really want for a release because it saves you about 25% of file size, still takes a lot of time and has huge memory requirements, so you just want to do that once in a while. The good thing is that the debug data is always separated, so even if you build the debug image, you just need to download the 200 megabytes, and the one gigabyte of debug info is only downloaded by the Chrome tools to debug DWARF in the browser. So let's have a look at the really larger open problems. First, the event loops are still a major problem, because there are simply too many dialogues. Just grepping for the message dialogue creation, which is just a message box, gives something like 250 occurrences, and obviously a lot of dialogues in Writer need to be ported as well.
There's an easy hack to make the message dialogues async, and you can run your normal LibreOffice with the SAL system event loop to emulate the Emscripten behaviour, which will instantly crash in something called Reschedule. It should be quite hackable; just look at the easy hack in Bugzilla. The other problem, which we did not really expect originally, is that Qt is much more buggy than what we saw in the initial demos, where we tested the worker threading with the Mandelbrot stuff, and it really needs more fixing. There's already an extra branch in the clone on allotropia's GitHub which has something like 20 Wasm patches, mainly backported from upstream, plus some work-in-progress stuff from us, which you need at least for the pop-up menus. But looking at the alternatives, which are not really feasible, like porting GTK to Wasm: that's really the toolkit side, not the LibreOffice side, which should be largely unaffected. At least if you look at Qt, the changes for Emscripten in the Qt code are really minimal. Then there was the thought of somehow using the Collabora Online frontend for the WebAssembly stuff as well, but it's not clear how that could be integrated; maybe we'll use some of it in the end, but not as a completely different project. And then there is no obvious way to do a whole VCL plugin completely for Wasm, with abstraction and everything, that would fall back on the old generic VCL rendering of widgets; that's first of all not nice, and probably also a lot of work. So currently we are simply staying with Qt and we will see what happens. Then, to finish my part of the talk, the list of all the stuff that can generally be tackled and needs to be done in some way or another: uploading files, using more browser APIs, making some kind of persistent storage for the files that you download and also for the user data. We could use the normal mechanisms to do translations, but that's not really something that is needed now. If we really want some IPC communication, we could implement a UNO bridge; there are some ideas, but it's also not needed now. We originally expected to do some kind of peer-to-peer communication, encrypted talk between two Wasm instances, but that's also something for after the other fixing work is done almost everywhere. Porting the Wasm build to a local runtime outside the browser is something that would be nice, but it's not clear what the GUI would do there; maybe you could port Qt there too, but no idea if that will work at some point. Then there is the possibility of dynamic loading of Wasm modules and scripting, but it needs more support from Clang and it's really not feasible currently, because then you would have to skip thread support, which we really can't do at the moment. And we have a long-term plan to see how well Meson can replace gbuild; we will test that with Emscripten, and we will see how it works out in the future. That's it from my side, so now Thorsten will tell you a bit about the rest and show a demo. Thanks. Okay, so let's get a bit more hands-on here. We have dubbed LibreOffice for WebAssembly, shorthand LOWA, as Jan-Marc was pointing out while explaining the technical details of how we did that and the journey towards where we are today. All of that would not have been possible without project funding by NLnet, in turn from the EU Horizon 2020 funding for this project, and of course my company, allotropia, also helping to further that and providing the inspiration, the infrastructure and the people. LibreOffice in a browser.
That's been the theory so far. Now let's jump into the practice and take a look at the state that we have today. For that I've recorded a little demo that we will get into in a second. Let me switch over. Okay, so let's see. What you see here is a standard Chromium-based browser, with LibreOffice being served from localhost. Essentially you can serve that from any server in the world: it's a static binary that gets downloaded once, then it's cached, and essentially every single bit is running locally. So you don't need any server beyond getting the bits served once, but it's running natively in your browser. And yeah, right now you're getting the full desktop experience; that's what you're looking at here. It's essentially LibreOffice master as it looks and feels today. There are ups and downs to that; for the demo I think it's just great, because it gives you a lot of control and a lot of features. Going forward we're probably going to do something that is more of a web-like UI here. What we can see is that it's blazing fast. You've got all the features, all the layouts, all the objects, everything. You've got the sidebar, context sensitive. You can touch up everything, you can select everything. The document lays out as it would in the desktop or in the mobile version, so there's absolutely no difference; it's exactly, precisely 100% the same code and the same functionality. Here is a bit of a more complex table layout, like multi-line columns, with editing and control. A bit of undo, so we go back. What you see here is some built-in image editing from LibreOffice; it's also there, so again, no limits in what you can do with your content. There's absolutely nothing in terms of differences. It's not like with other products, where you would be limited in what you can do: maybe you can see everything but you can't edit everything, or you can see and edit everything but you cannot insert certain things because features are missing. That is not the case. Right now lots of different draw shapes are being inserted in this text document here. What's also great: you have full control over charts. Not only can you see them, you can touch them up and change them to your heart's content, especially when it comes to 3D features. Pretty nice here, as you can see: switching to different types and then going 3D. All of that in a browser, and all of that is going to look exactly like that in the desktop version, with the same functionality, the same feature set. Fontwork, just the same; also quite a nice feature, also rendering exactly as it would in the desktop version. Some little features here with shapes and text in shapes. Also, there's no difference like you would get if you somehow emulated the editing and then only got the rendered result back after a round trip to a server that renders it for you. No, no, it's getting processed, rendered, controlled, edited and inserted right here in your local browser. There's no lag, and it's fully offline-capable, so there's no need to do any round trips to any network resource. Everything, the full functionality, is right here in the browser. Yes, quite some choice here in terms of extra functionality; you can access that, you can configure your user interface. Whether, in general, that makes sense for quick web editing remains to be seen, but you have the option, and it's always great to have options. Then you decide: if you don't need it, you can disable it. But for the power users, it's all there. Quite some complex shape editing, good control.
Text frames are a bit of a desktop publishing feature, a page on top of a page, so you have the same text editing and control that you have in a Writer document also inside this frame. Again, all the features are available as you're used to from the desktop version. These 3D shapes, I'm not sure whether this is a widely known feature at all, but it's pretty cool, very nice graphics. And again, the full control is available here, all the choices, all the dialogues that you have. Yeah, it's cool to show off the features and also the speed; that's a software renderer built into LibreOffice that does that. Right, and here we are with a nice semi-translucent arrow. Thanks for that. And now let's move on to a bit of an outlook on what's going to come. Thank you. Okay, so what's up next? Let's update the project plan. Obviously we need to focus on JavaScript now, with most of the groundwork done. What's clearly up next is making this nicely embeddable and working on the GUI. The desktop GUI that you've seen is very likely going to stay, but just as one of the options. What we really need and want is something much more lightweight that is also easy to embed, a native JavaScript UI; but having the desktop UI as an option is always great. The second larger bit, as Jan-Marc already mentioned, is that we need to use many more browser APIs. Right now we pay the price of shipping all those libraries that are already in the browsers, like ICU, like rendering, dictionary stuff, and we really don't need that. The only reason we ship them is that it was the fastest way to get the demo working. I really want to port that to use native JavaScript APIs; ICU is probably the largest bit that we carry, and also Qt, although it remains to be seen how much we can strip that one down. We also need to focus on getting an end-to-end encrypted editing session going. We now plan to do that by summer; we had to push this out, but we're hopeful that we can make it, with essentially everything working that we wanted to have working last year. And finally, we really want to have something like a minimum viable product out by autumn this year, so that you can already take it and use it for rich text editing in a real-world application. Not much else, I think, is possible until then. Real-time collaboration especially is still quite a bit out, I think, but having a document, editing it and saving it back, that should be doable by the autumn of this year. What to expect? So what's our vision, how do we plan to productise all of this? This is not going to be a replacement for desktop or mobile LibreOffice. If you can install LibreOffice locally on your machine, just go for it; that's more lightweight and has probably much better platform integration than anything that runs in a browser. It's also not going to be a replacement for Collabora Online; that's really a different product with quite different needs that it serves, for example excellent real-time collaboration. Instead, the plan is that we will be serving other needs. So assume the platform is the browser: then you might need something for touching up a letter before it goes out from your customer relationship management, or you want to edit some templates before they go into some background PDF production. And yes, everything just works: everything that works on the desktop now works in the browser, in terms of very, very complex document editing.
Or let's assume your platform is the browser, but you want it to be privacy by default, or you need end-to-end encryption. Anything else would go via a server, and even if it's a server controlled by you, most of the time you as a customer will be using a software-as-a-service solution, and it will not be your server. Then you have a solution here that never sends any data to any server, because it's all local, and you can of course run it peer-to-peer between a number of browser sessions. Or you need massive scale for your product, but you don't have the millions of servers that you would need if it were a server-based application. Here's something that really scales like a static website, because it is just that: some bits that the browser needs to download initially, which then get cached at the edge or even in the browser, and you never pay for any cycles or any bandwidth anymore. So if you want to play with that, you would be most welcome to look at our demo setup. Just be aware that it's running nightly builds, so it might be broken; but then just check the next day, because usually we update it frequently and it's running most of the time. We would also love to hear your feedback, what you think. There's going to be a Q&A session after this, but you can also reach us at any time via the contact emails from the first slide. We're also hanging out in the LibreOffice development IRC channel on Libera.Chat. We'd love to hear what you think, and also, if you're interested in helping us, hit us up; or if you're interested in using this as a product or as part of your offering, I would be very curious to hear about that as well. Okay, thank you very much, and let's head over to the Q&A. Okay, so I think we're live for the Q&A. There were a few questions in the chat regarding the asynchronous dialogue work. Quite a lot of that work has been done already, but there are still reams and reams of smaller dialogues that you will easily run into when you use the desktop UI. It's kind of repetitive work, so there's still work to do. Then there was a question about the memory profile, especially on low-end or mobile devices. Maybe Jan-Marc, something you can answer? Are you muted? Okay, so I'm taking the question then, since Jan-Marc is muted. I've already written in the chat: the version we are compiling works with 1 GB, and 512 MB is not enough. What you're normally running is 1 GB of pre-allocated memory, and it doesn't grow the memory currently. That's the version you can normally work with, and it's working fine. Okay, the next question from the chat was about performance improvements. I think those are largely shared: any kind of fixes, improvements, shedding extra size from the desktop version, from the LibreOffice core version, also benefits the WebAssembly version. The LibreOfficeKit-specific changes don't have any direct impact on WebAssembly as of today. Then there was a question about LanguageTool, so Java-based extensions. That's probably going to be a challenge. Jan-Marc, what do you think? Well, if I remember correctly, there's already a Java virtual machine written in WebAssembly, so you can run your Java stuff on those WebAssembly Java virtual machines, and in theory you could probably run that stuff together, but that's experimental stuff, I guess, for somebody who wants to spend some weeks on it.
Yeah, and I think broadly, the sky's the limit, but there's probably enough innovation, enough stuff going on in the browser, web-only arena, that that's where I would look first. The challenge internally is that we would probably need to get the UNO bridge properly working, which we kind of sidestep right now. There's another question about Qt 6. Jan-Marc, you did try that, didn't you? I did try to compile Qt 6 to WebAssembly, but I gave up after two days. In theory it should just work, because the Qt 5 and Qt 6 backends are the same, so they do not need any additional patches for WebAssembly; it should work. I just cherry-picked a few patches from Qt 6 to our Qt 5 branch, and from what I see in the Qt 6 Wasm stuff, it's as broken as Qt 5.
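One practical detail when hosting a build like this yourself, mentioned here as an assumption of mine rather than something stated in the talk: threaded Emscripten builds generally need the page to be cross-origin isolated so the browser exposes SharedArrayBuffer. A minimal static file server sketch that adds the COOP/COEP headers, with the port and directory layout chosen purely for illustration:

from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class IsolatedHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Cross-origin isolation, typically needed for SharedArrayBuffer / pthreads.
        self.send_header("Cross-Origin-Opener-Policy", "same-origin")
        self.send_header("Cross-Origin-Embedder-Policy", "require-corp")
        super().end_headers()

if __name__ == "__main__":
    # Serve the current directory, e.g. where the soffice.html/.js/.wasm files live.
    ThreadingHTTPServer(("localhost", 8000), IsolatedHandler).serve_forever()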
LOWA - LibreOffice WebAssembly. Most recent updates, working code, and ample stories of how we got to have LibreOffice run natively in a browser.
10.5446/56880 (DOI)
Hi, my name is Enrique Castro. I'm from Bolivia, which is located in South America. Today we are going to explain how Basic macro scripts are executed on the server side when using a web application like Collabora Online. First we are going to analyse how the macro selector dialogue is implemented in LibreOffice core and how, once a Basic macro is selected, it is executed. After that we are going to explain how the same dialogue is implemented on the client side using JavaScript. In LibreOffice core there is a UNO command called RunMacro. This UNO command is triggered by some user interaction, like a menu. Once it's triggered, the UNO framework dispatches this command to a specific place where a macro selector dialogue is created. When the user interacts with it and selects a specific Basic macro, the entry point where the macro is finally executed, a method called callScript, is invoked. Here is the source code of RunMacro; there is the path of the source code and the entry point where it is executed. If you scroll a little bit, you will see where the macro selector dialogue is created and executed. Finally, the macro selector dialogue is shown and the user can interact with it to select the Basic script to run. The entry point to run the macro is this method here; we will see later that this is the main code path in LibreOffice core to execute the macro. If we make an abstraction of the workflow, we can see in this diagram that the UNO command is triggered, then the macro selector dialogue is shown, and finally the script name is passed as an input parameter to the main code path, let's call it "run script". Of course, if we look at the diagram this way, the important part in LibreOffice core is just this code path, and we only have to provide as input the script name of the Basic macro to execute. What happens if we want to run the macro from a remote location? Here we have a client application, a web application like Collabora Online, and here we have LibreOffice core with the main engine, "run script". Both are connected via a communication socket. If we send the same data, the script name, as an input parameter, the macro will be executed on the server side. On the client side, Collabora Online needs to create an HTML macro selector that serves as a user interface for selecting the Basic macro to run. In order to create HTML dialogues on the client side, we have several methods; we discard these two and focus only on creating the dialogue dynamically. Here you can see the source code of the macro selector dialogue, the class constructor. You can see that this code in LibreOffice core is not changed at all; the change is that we dump the dialogue as a property tree. You can see the properties being dumped to JSON data and sent back to the client side. Again, making an abstraction: on the server side we have the LibreOfficeKit process; once the macro selector dialogue is created, instead of showing the dialogue it is just dumped as a property tree, the JSON data is sent to the client side, and the web application, like Collabora Online, uses that data to create an HTML macro selector dialogue. Here is the main entry point on the client side, where it receives the JSON data from the server to build the macro selector dialogue dynamically. Finally, you can see here a sample of how this dialogue is built, and the CSS style that is applied, which can be customized.
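As an aside, the entry point named above, the .uno:RunMacro command, can also be dispatched from a plain Python/UNO script against a locally running LibreOffice. This is only a hedged illustration of that entry point, not the Collabora Online code path the talk describes; the connection string and port are assumptions, and the script needs LibreOffice's bundled Python (or pyuno on the path), with soffice started as: soffice --accept="socket,host=localhost,port=2002;urp;"

import uno

def get_remote_context(host="localhost", port=2002):
    # Bridge into a running soffice instance over a local socket.
    local_ctx = uno.getComponentContext()
    resolver = local_ctx.ServiceManager.createInstanceWithContext(
        "com.sun.star.bridge.UnoUrlResolver", local_ctx)
    return resolver.resolve(
        f"uno:socket,host={host},port={port};urp;StarOffice.ComponentContext")

def open_macro_selector(ctx):
    smgr = ctx.ServiceManager
    desktop = smgr.createInstanceWithContext("com.sun.star.frame.Desktop", ctx)
    dispatcher = smgr.createInstanceWithContext(
        "com.sun.star.frame.DispatchHelper", ctx)
    frame = desktop.getCurrentFrame()
    # The same UNO command the talk starts from; on the desktop this pops up
    # the macro selector dialogue.
    dispatcher.executeDispatch(frame, ".uno:RunMacro", "", 0, ())

if __name__ == "__main__":
    open_macro_selector(get_remote_context())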
However, there were some problems with creating dialogues on the client side. For example, the server sent all the data again for each user interaction, like the click of a button, and the dialogue was recreated completely. So I improved that with partial updates, which means that the server side only sends the invalidated data to the client side, to update just that region. I will also give you some hints for potential further improvements. Here is the source code for when the client side receives a partial update: in this example only a single control of the dialogue is recreated, not the complete dialogue. Here is the big picture of how this macro execution works. We have the client side here, with the Collabora Online web application, and the server side with the LibreOfficeKit process, plus the communication channel, the socket, that carries the UNO command. When the LibreOfficeKit process receives the UNO command, it creates a shallow dialogue; this is just a bunch of data structures, and by the way it is not rendered. It sends the JSON data to the client side, and the client creates this HTML dialogue. Both are connected, and the user interaction communication begins: there is a click on a button here, and the server responds with the new data. Finally, the user wants to run a macro and just needs to click the run button; we simply forward that to the logical dialogue on the server side. And finally, as we saw before in our abstraction-layer diagram, we see exactly the same thing as in LibreOffice core: the data is passed to the main Basic script engine to execute the macro. Thank you very much.
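To make the partial-update mechanism described in this talk concrete, here is a purely conceptual sketch: the client keeps a model of the dialogue keyed by control id and merges only the invalidated control that the server sends, instead of rebuilding everything. The field names ("id", "children", "action") are assumptions for illustration and do not mirror the actual Collabora Online JSON messages.

def index_controls(node, table=None):
    """Flatten a dialogue tree into {control_id: node} for quick lookup."""
    table = {} if table is None else table
    table[node["id"]] = node
    for child in node.get("children", []):
        index_controls(child, table)
    return table

def apply_update(table, message):
    """Apply either a full dialogue rebuild or a single-control partial update."""
    if message.get("action") == "full":
        return index_controls(message["dialog"])      # rebuild everything
    updated = message["control"]                      # partial: one control only
    table[updated["id"]].update(updated)              # patch that control in place
    return table

# Example: first a full dialogue, then one button changes its label.
controls = apply_update({}, {"action": "full", "dialog": {
    "id": "macro_selector", "children": [{"id": "run", "text": "Run"}]}})
controls = apply_update(controls, {"action": "partial",
                                    "control": {"id": "run", "text": "Run Macro"}})
print(controls["run"]["text"])   # -> Run Macro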
The implementation of a Macro Selector Dialog on client side to execute VBA macros on the server side.
10.5446/56888 (DOI)
Hi there. Today I'm going to be talking about decentralized collaborative annotations using Matrix. So on the agenda today is a demonstration of a tool, Matrix Highlight, that I've built, which lets you highlight the web collaboratively using Matrix; then a discussion of why I liked building on Matrix and why I think you might want to consider doing the same; and finally I'm going to talk about using Matrix for more than messaging. So first of all, a quick demonstration. Alright, time for a quick demonstration. What I have here are two Chrome windows, both of which are logged into Matrix Highlight, but with different accounts. That way I can give you an overview of what it looks like when two users are going at it at the same time. What the extension gives you is the ability to right-click any page and click "highlight with Matrix", which brings up a tooltip that hovers over anything on the page. This tooltip, and actually let me do this first on both sides, has a bunch of things, but the first thing you need to do to start using the tool is to create a Matrix room that corresponds to your page. I will do that really quickly. And what I'm also going to do is click users here and invite the user on the left so that we can collaborate on this page together. I think this is it; this is their Matrix username. And I will join it on the other side. Right. So now I have two pages logged into the same room, and I can start highlighting. I can select some text, which gives me a tooltip, I can pick a colour, and there it is. It appears on both screens. In fact, I can refresh the page, highlight using Matrix again, and it's still there: this is persistent, and it's stored on Matrix. I can do this a couple more times, and I like to use different colours because it's prettier that way, but you can definitely stick to whatever you prefer. And of course, what I can also do is click one of these things and make a comment: "Fascinating." These comments show up in real time on the other person's end; I just didn't have them open. "You're right." And then of course these two things are synchronized. There's some minor support for text editing: bold, italics, code. And of course you can also look through a list of all the quotes that have been highlighted so far. "This term, I'm a TA for Oregon State University's..." is the first sentence of this article, and so on and so forth. So why did I choose to build my tool on top of Matrix? There are many good reasons. For one, Synapse is an established piece of tooling for decentralized, federated communication. Users could install this piece of software right now and communicate with everyone else on the federation if they so chose. And you can self-host it; that's just fine. Furthermore, Matrix as a protocol has support for many different things that come up frequently in various use cases. You can send and receive files. You can relate events to each other, one use case for which is threads. You can organize rooms into spaces and spaces into spaces, and it all goes very deep. And there's also support for end-to-end encryption; I think we can all agree that providing users privacy is a good thing. And of course, Element itself actually has debugging tools, so I can send custom events without implementing a client myself, I can view the sources of events that I receive, and I can browse room state and do all sorts of things that I as a developer really appreciate. And now we come to how I did it, or: Matrix for more than messaging.
So take a look at this regular old Matrix event I plucked from one of my rooms. There's a lot of things here, and only two of them matter for us, the type of the event and the content. So the type denotes basically the sort of thing the event represents. In this case, it's a message. And the content, the contents of which usually depend on the type, in this case is the text from the message. I think that's a good idea. The beauty of Matrix though is that you can make your own message types and fill their contents with pretty much whatever you want. And that's exactly what I do. I send a custom highlight event whenever a user selects a piece of text and clicks a color. And this event has five keys, highlight text, which is the text, computer science student in this case. The beginning and end position of the highlight and the color. And replies to these highlights are actually just plain old regular Matrix messages that have an M dot relates to thing in their content, which is actually built into Matrix and it's part of the threading API. So these are actually just threads. And I think the beauty of Matrix is that you can pretty much represent anything in Matrix that can be represented using a graph because events are just nodes and various relations are arrows between them. So if your piece of data can be represented with this and most likely can, you can leverage Matrix to create a cool piece of software like Matrix Highlight. Thanks for watching.
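The custom highlight event and the threaded reply described above can be sent with plain HTTP calls to the Matrix client-server API. This is only a sketch, not the Matrix Highlight extension's actual code: the homeserver URL, access token, room id and the org.example event type are placeholders, and the content fields simply follow the talk's description.

import time
import requests

HOMESERVER = "https://matrix.example.org"
TOKEN = "YOUR_ACCESS_TOKEN"
ROOM_ID = "!abcdefg:example.org"

def send_event(event_type, content):
    # Transaction ids only need to be unique per access token.
    txn_id = str(time.time_ns())
    url = (f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}"
           f"/send/{event_type}/{txn_id}")
    resp = requests.put(url, json=content,
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    return resp.json()["event_id"]

# A custom (non-spec) event type for the highlight itself.
highlight_id = send_event("org.example.highlight", {
    "highlight_text": "computer science student",
    "highlight_start": 120,
    "highlight_end": 144,
    "highlight_color": "#ffd54f",
})

# Replies are ordinary m.room.message events, related to the highlight
# via the threading relation.
send_event("m.room.message", {
    "msgtype": "m.text",
    "body": "Fascinating.",
    "m.relates_to": {"rel_type": "m.thread", "event_id": highlight_id},
})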
This talk showcases Matrix Highlight, a tool built on top of the Matrix protocol to collaboratively annotate and comment on pages on the internet. By building on top of Matrix, we get decentralized, federated, and open web (or more) annotation. This talk will cover a demonstration, using Matrix events for non-messaging purposes, and the benefits of building on Matrix.
10.5446/56889 (DOI)
Hello. Hi. Welcome to my talk. My name is Shay, and this is Events for the Uninitiated, a junior's guide to events in Matrix. I have never done one of these pre-recorded talks before and I'm working with some new technology, so please just bear with me; hopefully I'll make it through. But anyways, welcome, and I just want to say a few things about myself. My name is Shay. I am a junior engineer at Element. I mostly work on Synapse, which is a Matrix home server implementation. I've been at Element for about three months, and this is my first software engineering job, so I'm sort of new, and I thought I would use some of my new, curious energy to share some of the most recent information I've learned with anybody who's interested, hence this talk. So I decided to do this talk for a couple reasons. The first one is I'm curious, and I found the best way to learn something is to put yourself in a position to have to explain it to somebody else. The second was I wanted to make the spec, the Matrix specification, more approachable. I found it very intimidating when I first started; it is a large document with a lot of information. But as I sort of got to know it, I found that actually it's very useful; there's a lot of really cool info in there, and it's really worth digging into. So I thought I'd sort of set myself up as a friendly guide. And events are central to Matrix applications, so it's a good idea to just have some understanding of what they are and how they function in a Matrix application. So this talk is for the curious. It's for newer developers or anybody who has an interest in hacking on applications in the Matrix ecosystem. Hopefully it'll just give you a better idea of what's going on. And so what I hope you walk away from this with is a better mental model of how Matrix applications function and a greater willingness to engage with the spec. One thing I will say is that software can be a little bit tricky to talk about. It's sometimes very hard to know what layer of abstraction to talk about it at, and it's very difficult to use language that's very precise without falling into jargon and obtuseness. So I'm really going to do my best to be precise but accessible. I'm going to give it my best shot. And when talking about events, the way that I want to break that down is that I want to talk about them in the micro, so we'll just take a look at how they're structured and how they function, and then we'll look at where they fit into the larger context of a Matrix application; a little bit less time on that, more just a little bit of form and structure. So the first thing I'd like to do is introduce the spec, which is the Matrix specification. There are a couple things that I want to point out about it. First off, make sure that you're looking at version 1.1, which is the newest version. Second, there are a couple different APIs here: there's a client-server API, there's a server-server API, application service, identity service, push gateway, etc. We're going to focus mostly on the client-server API, but just be aware that there's more information in here.
And the second thing I wanted to highlight was before you if you go on the landing page and you go to the third section there's a talk on architecture. It sort of talks about the overall architecture of the matrix protocol and it's really very useful. I found myself referring to it a lot. It's a good thing to just take a look at this whole sort of beginning really this section 3 is very useful. So let's just just showing a little bit of how what's so exciting about, or not exciting but like what's interesting about the spec and just taking a first look at it. So okay. So the matrix defines APIs for synchronizing JSON objects known as events which is basically sort of a fancy way of saying it is a bunch of rules and definitions from which you can derive a matrix application. So you just sort of add implementation and stir. So like I said I work on Synapse which is an implementation of a home server. There are other versions of home servers. I think conduit is one of them. There's many implementations of clients. So fluffy chat element and because they adhere to the specification, they're all able to communicate with each other. So a conduit home server and a matrix home server can talk to each other and the client, you know, an element can talk to any of those home servers. This is one of the things that's very cool about a matrix specification. And one of the other things that I really encourage people to do is to look at the specification and then look at how it's implemented in a specific implementation. I use Synapse. You can use Synapse. It is open source. You can go to GitHub. You can look at the code. You can kind of toggle back and forth between the specification and the code. And it's a really great way to sort of get familiar with the code and what it's trying to do and also helps the two sort of inform each other. And I've learned a lot from doing that. So that's enough about the spec. Let's move on to events. So events, the events you've been waiting for. So all data exchanged over matrix is expressed as an event. So you can see obviously events are very important. Most of what we're doing is exchanging data. So that's what events are doing at their sort of base level. It's a JSON object. And so there's many, many different types of events. What fields an event has is determined by the type of the event and may also be affected by room version. But they are JSON objects. So it's just most of what's different between different types is what fields they have and what function they provide in the ecosystem. And so most events are exchanged in the context of a room. So as I said, several different factors determine form including what type of event it is and what version the room is and the type sort of determines function. So we'll take a look at one at an event very soon. But basically, many, you know, there are most things that happen in the matrix application are spurred by an event. So an event could be anything from sending a message into a room. It could be events create rooms, events they, you know, could change the who's the admin of a room, you know, all sorts of things like that. And so let's talk a little bit. So we've talked about events, let's talk about rooms. So rooms are a conceptual place where users can send and receive events. So they exist on a home server and they exist across home servers. So these rooms have no central point of control. 
So if I create a room on matrix.org and another home server joins that room, if the room if matrix.org goes down that room still exists. There's no central point of control and these rooms are shared across the home servers and participate in them, which is one of the main sort of decentralized aspects of matrix. And again, one of the things that I found that it's very cool. So you can see a room identifier usually has an exclamation point at the beginning and there's a room ID and then a domain. The domain is the home server in which the room was created on. So an example is you see this non-human readable string here exclamation point that's on matrix.org or synapse-dab matrix.org, which is the room where we discuss development on synapse and I encourage people if you're interested in this to join that room and start asking questions. People are generally pretty friendly there. So this is sort of a general overview. It's not I would say it's it's lacking in detail, but it's a good sort of view of sort of what happens when an event is sent. So we have a user Alice at matrix.org this user makes a HTTP request to a home server so it makes they send, you know, I want to make this request I have this room ID and I want to send an m.room.message event into the home server and the content is encapsulated in this JSON object so send it to the home server at matrix.org there's some authentication and some checks that happen there just to make sure that this person is allowed to send this message you know there's they're a member of the room, you know, things like that the home server then persists the event so it takes and we'll talk about this a little bit more later but it takes the event and puts it into its room graph and adds it to sort of the the list or the list of events that have happened in the context of that room and then it creates another HTTP request to another home server that is a member of this room so you can see we have once again important information in the room ID we have the event type m.room.message we have the content and once again encapsulated in that JSON object as I said when event is persisted in the room graph one of the things that's added to it is a pointer to the preceding message in the room so there's usually a list of the most recent messages in a room and that next event then references either one or several messages that immediately precede it and that's where it gets its place in the room graph which we'll talk a little bit about that more later you'll see here we also have a public key infrastructure signature from matrix.org this home server so that's just a fancy way of saying it uses some fancy math to mathematically verify to this other home server that it is matrix.org and not some random malicious sender and you see we have this shared data that ends up being shared between both home servers so we have the room ID each server is aware that the other server is a member of that room so that shared data each server is aware that Alice is a member of the room and is aware that Bob is a member of the room and so we have that shared data and also the content is shared and it's persisted on both home servers so then we it gets to the second home server a bunch of checks happen here too which we'll talk a little bit more in detail later but the home server the receiving home server basically authenticates that A the method the event is valid and B this person has the right to send this event there's a couple different things that happen there and then if 
it's that's all good that event is persisted in the second home servers graph and then the event is then sent to on to Bob at whatever client they're using so that's just sort of a basic I would say for this level I didn't go too deep into detail but like a little sort of overview of what exactly is happening and so this is you know one of the things to kind of keep in mind is that you know this is a fairly toy example but if you can imagine there's hundreds or hundreds of thousands of requests coming to a home server every minute and then home servers are then exchanging those between each other so the volume can get very high and it's something to think about in terms of like how do you manage you know managing something like this is different versus managing you know thousands and thousands of requests and that's sort of the magic of the math behind the matrix protocol that is able to handle that so there's many types of events you know there's m.room.message there's m.room.name there's m.call.answer there's m.typing there's m.receipts there's m.presence m.tag m.login ssso so all of these events basically do different things in the context of a matrix application so m.room.message sends a message into a room m.room.name changes the name of the room m.typing sends a typing receipt into a room that other people on other home servers can see the key verification request has to do with requesting encryption keys the login logging in with ssso a single sign on so there's you know basically kind of in as we said since all data exchanged over matrix as an event all sort of requests to do something or change something or alter data in a home server comes from an event the other thing you may notice if you are astute is that there you know this m.here and that just means that this type of event was outlined in the matrix specification but there are types of events you can create your own type of event and the matrix specification has information on how to do that so that m designation just means this is an event that is actually specified in the specification but it does not preclude other events and only events that are specified in the spec are allowed to use that sort of m that m beginning so let's see what else did I want to say so let's talk about a room event so I just wanted to give an example of you know when we talk about fields what are the fields so we have the content which is and it also has the type here so that's helpful for if you're trying to implement it or if you're debugging you can look at the spec and say hey what should this be is this the correct thing is it not is this the best way to implement this so we have content which is an object this is required field and lots of different things could go into the content we have an event ID which is a globally unique event identifier so when a request is accepted and event is generated there is a unique ID that is assigned to that event so that means that no other events will have that ID and that ID can be uniquely identified by that string we have an origin server time stamp so that's a time stamp in milliseconds in the originating home server when this event was sent so the first home server from which this event comes has the time stamp and then we have a room ID which you've seen it's the room associated with the event I'm sorry that's my dog in the background making noise and then we have the sender and that refers to the user who sent the request and then we have the type so we've talked about this the type of event so 
they have different types their type refers to their functionality and there's lots of different ones you can create your own as I said before then we have this field unsigned data and it contains optional extra information about the event the unsigned data is not required all of those other fields are required if you send an event without those required fields somebody very soon will reject it it'll get lost in the ether and in unsigned data you have age which is the time in milliseconds has elapsed since the event was sent so you have this field redacted because so events can be redacted so that the content can't be seen sometimes that happens sometimes that what doesn't and then you have a transaction ID so that sort of gives you sort of like a little bit of an overview of sort of all events have different fields once you start looking at a bunch of them you'll see that some of the fields are very consistent obviously all of them have an event ID all of most of them I think have room IDs that sort of thing so they do vary but this is sort of a good way to sort of get a base sense of it and this is what an event actually looks like so this is the JSON and we have content we have and if we toggle back and forth you'll see all of the fields that we said are required are here so we have the event ID we have the origin server time stamp we have the room ID we have the type we have the unsigned field so that's just sort of what an event looks like it gives you a sense of sort of what's the what is the sort of underlying fabric of an Emitrix ecosystem and that is those events flying around all referring to different functions that they want to achieve and carrying little bits of data that relate to the function that they're trying to achieve so this is an important thing to know don't ever trust an event as a client you can't require you can't basically assume that the server is verifying and validate that the server will validate for you there is more in this at this at this you are a little here that you can read more about it there's actually a very great write up on why events can't be trusted so that just means don't raw, don't ingest data raw from an event if you're a client okay and so then we'll talk a little bit more so we've talked about sort of like the structure and function of events and then I want to talk a little bit more about how they are sort of put into the overall structure of a matrix application there are server server events so we've talked mostly about client server we talked a little bit about server server server server events are a little bit different they're called persistent data units or PDUs or EDUs which are ephemeral data units so persistent data units are exchanged between home servers in the same room things like message and state events and restored home servers and they reference the most recent past events. 
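Pulling together the required fields listed a moment ago, here is a tiny illustration of checking them on a client. This is only a simplification of my own: real event validation is far more involved (and room-version dependent); this just checks that the required keys exist and have plausible shapes.

REQUIRED_FIELDS = {
    "content": dict,
    "event_id": str,
    "origin_server_ts": int,
    "room_id": str,
    "sender": str,
    "type": str,
}

def missing_or_wrong(event):
    """Return a list of problems with the event's required top-level fields."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in event:
            problems.append(f"missing {field}")
        elif not isinstance(event[field], expected):
            problems.append(f"{field} should be {expected.__name__}")
    if "room_id" in event and not event["room_id"].startswith("!"):
        problems.append("room_id should start with '!'")
    return problems

example = {
    "content": {"body": "hello", "msgtype": "m.text"},
    "event_id": "$143273582443PhrSn:example.org",
    "origin_server_ts": 1432735824653,
    "room_id": "!jEsUZKDJdhlrceRyVU:example.org",
    "sender": "@example:example.org",
    "type": "m.room.message",
}
print(missing_or_wrong(example))   # -> []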
It gives you a sense of the underlying fabric of a Matrix ecosystem: those events flying around, all referring to different functions they want to achieve and carrying little bits of data relating to those functions. This is an important thing to know: don't ever trust an event as a client. You can't assume that the server is verifying things, or that the server will validate for you. There's a URL here where you can read more about it — there's actually a very good write-up on why events can't be trusted. It just means: don't ingest data raw from an event if you're a client. Okay, so we've talked about the structure and function of events; now I want to talk a little more about how they fit into the overall structure of a Matrix application. There are server-to-server events — we've talked mostly about client-server, and a little about server-server — and server-to-server events are a bit different: they're called persistent data units, PDUs, and ephemeral data units, EDUs. Persistent data units are exchanged between home servers in the same room — things like message and state events — they are stored by the home servers, and they reference the most recent past events. Ephemeral data units are exchanged but they're not stored. When we talk about being stored — I spoke about this a little when I was going over the graph — we're talking about events being persisted in the room's event graph. So I want to talk a little more about graphs. You've heard this, and you'll probably see it more in the specification: rooms are structured as a directed acyclic graph. If you were not fortunate enough to get a degree in computer science you may not have heard of this, but a directed acyclic graph is basically a graph whose nodes — these could be considered the nodes — have edges which are directed, so they point in one direction, and which doesn't have a cycle. A cycle means: can I start at one node, follow the edges, and end back up at the same node? You can't do this here; if for some reason there were a directed edge from this node back to that one, that would create a cycle and the graph would no longer be acyclic. So it's very important that there's no cycle — there's a definite structure to it. One of the important things about this is that the ordering determines the state of the room. If you consider that there are events that can do things like add users or ban users, the overall state of the room very much depends on which event came before which event — how events came in time — which is a little difficult when you consider that events are going to different home servers, home servers are exchanging events amongst each other, and this is all happening at once. The way Matrix deals with this is to put events into a DAG and, when necessary, order the DAG very specifically. There's an algorithm for resolving the state of the room, and this algorithm depends on the room version: there have been a number of different versions of the room spec, and different rules are outlined in different room versions. It's a little complicated, but when you start reading about it, it actually makes a lot of sense — it's basically: what set of rules do we use to strictly determine which event came before which event? You can see in this little pseudo room graph that I have here: we have the very first event, which is an m.room.create event, and then an m.room.message — we'll skip servers joining and things like that, so assume people have joined and it's okay that they're in the room. They send a message into the room and reference that most recent event. Then we have these two other messages, both sent into the room, but one is sent to one home server and one is sent to another home server, and they're sent at basically the same time, so they both reference this last message as the most recent event. That's acceptable, and this is exactly the situation where, okay, both of these events reference the same past event — that's totally fine — so how do we strictly determine which one comes before the other? That's where room state resolution comes in.
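To illustrate the fork just described, here is a hedged sketch (again with serde_json, made-up IDs) of two message events that both reference the same earlier event through prev_events, the federation-format field that builds the room DAG. Real PDUs carry more fields (auth_events, depth, hashes, signatures, ...); only the DAG-relevant part is shown.

```rust
// Sketch only: two events sent at "the same time" on different homeservers,
// both pointing at the same most recent event in the room's graph.
use serde_json::json;

fn main() {
    let from_server_a = json!({
        "type": "m.room.message",
        "sender": "@alice:server-a.example",
        "content": { "msgtype": "m.text", "body": "first branch" },
        "prev_events": ["$event_c"]   // both branches reference event C
    });
    let from_server_b = json!({
        "type": "m.room.message",
        "sender": "@bob:server-b.example",
        "content": { "msgtype": "m.text", "body": "second branch" },
        "prev_events": ["$event_c"]
    });
    // Both branches are valid; state resolution later decides a strict ordering.
    println!("{from_server_a}\n{from_server_b}");
}
```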
I think I'm running a little out of time, but hopefully this gives you a sense of it. The room state resolution algorithms in particular are a little hairy to get into, so I'll save those — they're interesting to look at, and it's a cool thing; if you want to talk about some of that stuff, start asking questions and I'd be happy to answer whatever I can. And yeah, so that's pretty much it for me. Hopefully this was useful. Like I said, please feel free to ask questions, and thanks for coming to my talk. — Cool, a question: can I send any kind of event in a Matrix room? Do they need to exist in the spec? Can I experiment with my own events, and once I'm done developing something with those new events, what can I do so that everyone knows what to do with them? — Yes, you can send your own experimental events into a Matrix room; they don't need to exist in the spec, so you can play around with that and run some experiments. If you come up with something you think is really useful and could benefit the Matrix community, you can make a request to have that event type added to the Matrix core specification through an MSC. There's a whole process for adding things: you make a proposal, the spec core team takes a look at it, there's usually some back and forth, and then, if all goes well, your event type — or any other change to the spec that you might want to suggest — gets merged into the spec and your work lives on. So yeah, flexibility is definitely one of Matrix's main strong points. — Talking about Matrix: it's currently very well known for instant messaging, but can it be used for something else? — Yeah, I think it can. It can be used for video, it can be used for, you know, sensors talking to each other; I heard there is somebody working on building an MMO on top of Matrix. — We actually have a talk lined up about that later today, at 1:10 p.m. European time.
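To connect that answer to code: sending a custom, non-"m."-prefixed event type goes through the same client-server endpoint as any other message event. A hedged sketch with reqwest follows; the homeserver URL, access token, room ID and the org.example.* event type are all placeholders, and in real code the room ID should be percent-encoded in the URL.

```rust
// Sketch only: PUT /_matrix/client/v3/rooms/{roomId}/send/{eventType}/{txnId}
// with a custom event type. Needs reqwest (with the "json" feature) and tokio.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let homeserver = "https://matrix.example.org"; // placeholder
    let access_token = "syt_placeholder_token";    // placeholder
    let room_id = "!636q39766251:example.org";      // placeholder (percent-encode in real code)
    let txn_id = "m1643723529.1";                   // client-generated, unique per request

    let url = format!(
        "{homeserver}/_matrix/client/v3/rooms/{room_id}/send/org.example.weather/{txn_id}"
    );
    let resp = reqwest::Client::new()
        .put(url)
        .bearer_auth(access_token)
        .json(&json!({ "city": "Brussels", "temperature_c": 4 })) // arbitrary custom content
        .send()
        .await?;
    println!("response: {}", resp.text().await?);
    Ok(())
}
```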
Events are at the heart of the Matrix Protocol, but what are they? And how does the Protocol build on them to create rooms, room graphs, and other data structures? This talk aims to demystify some of these concepts, giving an overview of events, room event graphs, and associated structures. In the spirit of accessibility, there will be an attempt to make the talk as approachable as possible for those without a ton of programming experience (although some basic knowledge will be helpful!). We will look at the Matrix specification and, if time permits, may even dig into some code in Synapse. If you're interested in hacking on Matrix/Synapse but are newish to programming or intimidated by the core spec, or are just curious, this talk is for you!
10.5446/56892 (DOI)
Hello, I'm Jorick, also known as David. I'm a Tech Lead at Element. I'm working on moderation tools for the Matrix Network. And I'm going to talk to you about moderation tools for the Matrix Network. So as you may have heard, Matrix is a federated distributed network that features, among other things, chat. There are some people around the net who seem to believe that it's impossible to do any kind of moderation on a federated network. So I'd like to show you some of the tools that are available to everyone to do moderation on a federated network right now. Before we do that, let's say a few words about moderation. Moderation is something that's defined by rules. In some rooms, say, you want to talk about politics and not sport. In some rooms, you want to talk about sport and not politics. In all rooms, you want to avoid being insulted and being spammed. Matrix itself is a protocol. It doesn't know anything about those rules. It's human beings who know about the rules. So what you can do is use a number of tools to block people and prevent people from doing any kind of such harm. A typical scenario of moderation is one in which users such as Alice Bob and Carla are happily chatting in a room and then comes in Marvin and Marvin wants to sell them all stinky French garlic spam. Nobody wants stinky French garlic spam, maybe. So what can they do from this point to avoid being inundated with stinky French garlic spam? We see several ways to approach this problem and starting with what can room moderators do and what are room moderators? As I mentioned, matrix is federated. Already precisely, matrix is, in some way, a distributed federated, appended only log of events. And everything, almost, is an event on this log. Which means when Alice creates the room, when Bob and Carla join the room, when Alice sends a message or changes her display name, everything of this is at least one message. Whenever Alice does something in the room, Alice sends an event to her home server. The home server performs a number of security checks. Is Alice really a member of the room? Has Alice been muted from that room? No, so then she can send things, probably. And then the home server propagates the event to other home servers that are part of that same room and then each home server sends the messages, after again security checks, to the members of the room, to the clients for each member of the room so that they can display accordingly, for instance, the new display name for Alice. So messages look like this. Look like bunch of JSON. In this specific case, well, messages have a type, a sender. In that specific case, there is a display name of Alice. There is an event ID, et cetera. Now one of these events is really important for our conversation. It's power levels. Power levels are part of the configuration of the room, but that everything else, they are events. A power event associates a number to each user. Here Alice has a number of 100 in that room. Good. Other users have a power level of zero. That's the minimum, 100 being the maximum. And what do those numbers do? Well they give you the right to perform a number of actions. For instance, if you have a power level greater than 50, you can ban or kick or redact. If you have a power level of zero, you can invite. If you have a power level of 100, you can change how encryption works, et cetera. Changing power levels itself requires a sufficiently high power level. So once users have the right power level, here Alice has a power level of 100. 
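As a hedged sketch, the content of that m.room.power_levels event might look roughly like this (built with serde_json; the numbers mirror the example given in the talk, everything else is illustrative):

```rust
// Sketch only: m.room.power_levels content matching the example above.
use serde_json::json;

fn main() {
    let power_levels = json!({
        "users": { "@alice:example.org": 100 },   // Alice is at 100
        "users_default": 0,                       // everyone else defaults to 0
        "ban": 50,                                // minimum level to ban
        "kick": 50,                               // minimum level to kick
        "redact": 50,                             // minimum level to redact others' events
        "invite": 0,                              // anyone may invite
        "events": { "m.room.power_levels": 100 }, // changing power levels itself needs 100
        "state_default": 50,
        "events_default": 0
    });
    println!("{}", serde_json::to_string_pretty(&power_levels).unwrap());
}
```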
That's higher than the minimal power level to be able to kick or ban. So she's going to be able to kick or ban Marvin. So you will probably not be surprised to see that kicking or banning are events. That's Alice sending a kick against Marvin. So Marvin is going to leave the room. As previously, Alice sends the event to her home server. Home server checks that she has the correct power level in that specific room, as she does the home server updates her list, its list of members of the room, sends the, propagates the event to other home servers who perform the same checks, update similarly the list of members in the room, and then send those messages back to users so that they can show that the user, that is, Marvin has been kicked. After having dealt with the immediate threat of Marvin sending spam, the next step is removing spam. This is called redaction. Any user can delete, that is, redact their old messages. And if you have a sufficiently high power level, you can redact the messages of other users in that room. That's again an event which specifies which message is being redacted and why we are redacting it and who redacted it. And the message for new users try to download the history of the room because they're connecting or they want new updates. We are going to see a message that is empty and that has been redacted because, well, for the reason that has been provided earlier. Again, this is propagated using the previous mechanisms for sending events across the Federation. One more thing that the moderator could do is set up a server ACL. Maybe Marvin is coming from a bad home server that serves specifically to send spam. In that case, an ACL, so a server access control list set in that room, will inform all the home servers that the room is not interested in receiving messages from that evil.com server. Similarly, those ACLs can be used for security reasons. Maybe you're dealing with confidential data and you want to be sure that nobody invites someone from the outside accidentally. In that case, you want maybe to say that only the local home server can access this room. That can be very useful. All these tools are very powerful, but there are many things that unfortunately they cannot do. In those cases, we need to leave the convenient nice federated world of room moderation and enter the much less convenient word of home server administration. Well, unfortunately, home server administration is not standard. It's a set of APIs that are defined by home server implementations. I don't know how all of them work. I can tell you about the Synapse implementation. I'd be surprised if other home servers were very different. But that's a big limitation. So in Synapse, if you want to become an administrator, either you are in the process of setting up Synapse, you can do some shared secret shenanigans and become an administrator. Or if there is already an administrator, they can turn someone else into an administrator. Once you're an administrator, you have access to these server administration APIs. They're all documented on the documentation of Synapse. So I'm not going to walk through all of them. I'm going to show you some of the key APIs that are very useful for moderation purposes. Typically, two of them are about deactivating accounts. You want to get rid of, maybe of Marvin, maybe not just in this room, but if Marvin is a spam bot, you want to get rid of Marvin on the entire home server. 
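As a hedged illustration of one of those deactivation APIs, here is roughly what calling Synapse's account-deactivation admin endpoint could look like. The endpoint path is taken from the Synapse admin API documentation; the server URL, admin token and user ID are placeholders, and the user ID should be percent-encoded in real code.

```rust
// Sketch only: POST /_synapse/admin/v1/deactivate/{user_id}
// Requires an access token belonging to a server admin on that homeserver.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let homeserver = "https://matrix.example.org"; // placeholder
    let admin_token = "syt_admin_placeholder";     // placeholder
    let user_id = "@marvin:example.org";           // placeholder (percent-encode in real code)

    let url = format!("{homeserver}/_synapse/admin/v1/deactivate/{user_id}");
    let resp = reqwest::Client::new()
        .post(url)
        .bearer_auth(admin_token)
        // "erase": true additionally erases profile data; false just deactivates.
        .json(&json!({ "erase": false }))
        .send()
        .await?;
    println!("status: {}", resp.status());
    Ok(())
}
```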
And either you want to inform Marvin that the account has been terminated, that's deactivating it, or you want to hide this from Marvin to prevent them from spam, to prevent them from spawning a new spam bot. You may want to quarantine media. If some images typically or movie has been uploaded to your server and it's something really ugly or something illegal in your country, there are chances that you do not want anybody to download it from your server. You could have lots of trouble with the law if you let people download illegal stuff from your server. You can also block or delete room, but I should specify that this is again not nicely federated even based things. These are all local APIs. When you deactivate an account, you deactivate this account on your server. If it's an account from another home server, that has no effect. If you quarantine media, you can only prevent users from downloading it from your home server. If it's on another home server, nothing happens. If you block or delete room, you prevent your users from being part of the conversation. But if another home server is part of the conversation already, this is going to have no effect on them. There is one more critical form of duration, API, that is home server admin only. That is a big problem, but I'm going to show you later why it's not that bad. It's abuse reporting. So if Clara, when saying that Marvin is spamming everyone, tries to call for help, by clicking on the report button. If she knows that Alice is the moderator and she can get in touch with Alice, very well. But let's say that she cannot and that she's rather clicking on the report button. What happens is that the home server administrator for Clara's home server is going to be informed of the problem. So there are two reasons for which that's actually not what you want. The first reason is that this administrator is typically not a member of the room in which all of these bad things happen. Since they're not a member of the room, unless the room is public, well, this administrator is not going to be able to see whether anything has happened. And the administrator doesn't know whether Marvin is malicious or whether Clara is malicious. And so the administrator cannot take any useful action. Moreover, the administrator is not part of the room. So the administrator cannot actually kick, ban, mute, redact in that room. So typically, the wrong person is receiving this abuse report. That's a problem. And we're going to see as soon as we can solve it. And to give you more detail, this is what a user report looks like. It gets you an event ID and a room ID, which are useful if you are already a member of the room and can take a look at the content, completely useless otherwise, a reason provided by Clara, and a user ID and a date. That's not nearly sufficient to take action. Before showing you more details about that, I'm going to mention something else about HomeServer administration. If you have access to the HomeServer itself, you can go beyond administration by inserting PlugableModules. Synapse is highly scriptable. You can write PlugableModules to do many kinds of things, including enabling or disabling registrations based on arbitrary reasons, such as whether the registration comes from a well known to be bad IP, for instance, or keywords, anything. You can deal with federation with servers, federate, defederation, etc. You can inspect the contents of messages to reject links that are known to be bad, or messages that contain bad keywords, etc. 
But all of this is subject to the same limitations mentioned previously: the server does not have access to the content of encrypted rooms, so that's a huge limitation. And the other limitation, of course, is that it's not federated — again, it's home-server-admin-only, so it only has an impact on your own home server. So part of the solution to the problems we mentioned is called Mjölnir. Mjölnir is a bot dedicated to helping moderators and to exposing moderation features in a convenient way. Setting up Mjölnir can take a few steps: you have to deploy it, create a moderation room, invite Mjölnir to the moderation room, invite Mjölnir to the community rooms, make Mjölnir a moderator in the community rooms, optionally make it admin, and optionally set up a reverse proxy. But once that is done, you have access to a whole host of features — this is just a subset; if you want the full list, please run Mjölnir's help command. To summarize, Mjölnir gathers the moderation of a number of rooms behind a single moderation room. Any member of that moderation room now has access, through Mjölnir, to the moderation tools for the other rooms: any member of the moderation room can kick, ban, redact, ignore, mute, etc. bad users in any of the moderated rooms, which is highly convenient. It also lets you set up room protections, such as not letting new users upload images or sounds, for instance. You can implement room policy lists, which are basically supercharged server ACLs. If Mjölnir is an administrator, you can also use it to drive the administration APIs in a convenient way. And then there is the abuse tracker. The abuse tracker is one of the most exciting recent features of Mjölnir: it replaces the ugly JSON that you did not actually have access to — because you are not the home server administrator — with something like this, a human-readable abuse report. It contains all the details you need, and if Mjölnir was a member of the room in which the event took place, it even contains the content of the event being reported. It even has a click-based user interface to conveniently ban, mute, redact, kick, or ignore the message. This is much better than the admin-based solution, and it does not require the non-standard admin API — so even better. And it's not the end of the story; I'm going to tell you more about it in a few seconds.
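Coming back to those policy lists for a second: a policy list is just a room full of rule state events. Here is a hedged sketch of the content of a single user rule, using the m.policy.rule.user event type from the spec's moderation policy lists; the entity and reason are made up.

```rust
// Sketch only: content of an m.policy.rule.user state event in a policy room.
use serde_json::json;

fn main() {
    let rule = json!({
        "entity": "@marvin:evil.example",   // who/what the rule applies to (globs allowed)
        "reason": "spam",                   // human-readable reason for the rule
        "recommendation": "m.ban"           // what subscribers are advised to do
    });
    println!("{}", serde_json::to_string_pretty(&rule).unwrap());
}
```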
So after having spoken of recent improvements, let me tell you about a few future additions to the ecosystem of tools — let me tell you about the future. The future is coming. One of the things I'm most excited about at the moment, one of the things we're working on, is moving towards decentralizing moderation as much as possible: moving from something that requires home server admin APIs and home server admins to take part, to something that, in a very convenient way, will hopefully soon let people spawn community rooms and moderation rooms without having to deal with Mjölnir configuration shenanigans, attach community rooms to existing moderation rooms, and perform moderation actions from the moderation rooms — again, extending what Mjölnir is doing already, but in a much more lightweight way. Every step we take in this direction is going to be useful, not just because it makes things easier, but also because it makes things more standard, and because it makes things more compatible for the day we finally move to Matrix P2P. But that's not all. We're working on many things. One of the things we're working on right now is phased moderation, in which moderators can immediately mark something as suspect, hide it from users, and then debate whether they should remove it or reinstate it — so, basically, undo. We're improving room protections and how they can be set up and customized. We're improving Mjölnir's user interface. We are working towards distributed trust through the concept of sharing policies between rooms, between servers, between users, et cetera. Hopefully, within a few months, the ecosystem of moderation will be way, way more powerful and easier to use, and I'm really excited about that. So I hope I have convinced you that while moderating federated communications isn't trivial, there are many tools and they can already be used to great effect. More tools are coming, and I'd like to thank you for having listened to this presentation. If you have any questions, I am all ears and happy to answer them. Thank you. — Looks like we went live a bit abruptly. So we have a few questions to answer. The first one, and the most upvoted, is: can a malicious home server ignore a kick event and stay in the room? — So, yes and no. The home server can try to ignore it, but if it's not receiving messages from the other servers, that's not going to be very useful to it. I should mention that I currently have a problem with sound, so please excuse me if what I say sounds like nonsense. Let me rephrase: assume we have a community of home servers, and one of them is malicious and decides to ignore kick events. If there is only one user from that home server and that user gets kicked or banned, the other home servers are just going to stop sending federation messages to that home server, and then, well, the user will not be able to receive anything. On the other hand, if the malicious home server has several user accounts participating in the room, the other accounts still receive the messages, and the home server can decide to maliciously forward the messages intended for one user to the other user. This works because Matrix keeps a local copy of the room: if there is another user, it's possible for the home server administrator to still read those messages and potentially pass them to the user that was kicked. — We have another question, about power levels. Right now power levels are numbers — I think they are supposed to be integers, but I'm not sure; I think there is an MSC to fix that in the spec. But let's say they are supposed to be integers between zero and 100. So if the last person in the room who was PL 100 demotes themselves to, say, PL 50, nobody can get PL 100 anymore? — I would need to double-check that, but it's not about being able to set somebody to a number higher or lower than yours; it's whether you have the authorization to send a power levels event. So if nobody has a power level sufficient to send power levels events, you're stuck, and nobody will be able to change power levels. — Right, so you don't have an administrator who can take over the room and put it back to the PL 100 state. But how much trust do you need to have in a home server administrator? What can a home server administrator do, and what can they not do? — Your home server administrator can do a lot of bad things if they want to, at least in non-encrypted rooms; in encrypted rooms, there isn't that much they can do.
In an encrypted room, they could do a few things, but that's basically joining the room and leaving the room, basically. Possibly a few minor things, et cetera. But yeah, that's an annoyance. That's a problem we have, and that's one of the reasons why we want to decrease the abilities of admin APIs, although that will not entirely solve the issue. And that's one of the reasons why we want to move towards P2P. It looks like P2P is going to solve a lot of issues. So people have really seen, seem to have appreciated the Mjolnir demo and your pronunciation of Mjolnir. I think, I don't know if I'm pronouncing it correctly. And one of the cool things about Mjolnir is those little buttons you have below. So right now, this is just reactions. Instead of reacting with an emoji, you send an emoji and a text. Is it possible? Do you think we could have proper buttons in the future? I hope so. I don't know if anybody is working on that right now. I know that we have projects to do it, but it's not the highest priority, I think. And it's a cool thing to have, but we'd rather have better moderation tools first. Talking of better moderation tools, we have a question, which is, are there any automated moderating process like this morning? There had been a spam wave with massive amounts of mentions. This could be detected automatically. Do we have tools to work in that direction? So we have many experiments in progress towards doing that. Mjolnir has a few automated moderating processes right now, and we are in the process of expanding them, but we're also playing with many other mechanisms. One thing I didn't mention is that all the things we're doing towards moving towards decentralized moderation also are good ways to plug in bots that could implement any kind of policy you want. Again, Mjolnir comes in with a number of policies, but the idea is to make it easy for people to implement their own. Greg, in the previous talk, mentioned that communities could defend each other, thanks to Federation, I guess. So can you expand a little on how it works today? And you covered a bit that in your talk, but what are the limitations of that? Do we have a way to finally discriminate, for example, if I'm banning somebody because of spam, or if I'm banning them because of conduct violations? So I'm afraid I could not hear everything you said because of my sound issues, so I'm going to answer the part I understood. If you want to ask something else, please write it down. So there are already mechanisms largely implemented by Mjolnir to share some policies between servers, for instance. If, say, evil.com is a malicious home server and it's well known to be used by spammers, there are mechanisms so that you can post information in a room and have Mjolnir on your home server subscribe to that room to be able to download this information. And there is no centralized policy for this. So your Mjolnir can subscribe to many rooms. You can post to many rooms. So it's a form already of federated defense of distributed trust. And basically we want to add more things like this and make it more fine-grained. I don't quite understand the context of the question that was asked. So there is no good mechanism that I know of right now to discriminate between... No, sorry, that's actually false. You can actually discriminate between COC violations, spam, etc. In terms of blocklists, you can have different blocklists, one for COC, one for terms of service, etc. But not all the tools will be necessarily able to understand that. 
And when you just ban someone, there is no magic thing that knows why you have banned someone. So I guess to follow up on that question, various communities should probably have at least two lists, one for spam bans and one for COC violations because they have different COC. And it should be fairly safe to subscribe to other communities spam blocklists. We are running a bit low on time. We have 15 seconds left. So I suggest that people move to the All Things With Moderation Room to ask more questions and we'll move on to the next talk. Thank you very much. Thank you.
So you want to moderate your Matrix room, or perhaps your homeserver? You're in the right place. In this presentation, we'll talk about both the core concepts of federated moderation in Matrix and the main moderation tools. Expect to hear about power levels, redactions, quarantine, abuse reports, Mjölnir and plans for the near future!
10.5446/56893 (DOI)
Hello and welcome to this talk, another Matrix talk, about the next generation of Matrix interfaces. I'm Will, or Half-Shot — you might see me around. I tend to work on the bridgey side of things, and today we're looking at a new thing: interfaces relating to bridges and integrations. This presentation is going to cover how bridge interfaces work today and how integrations work in such a way that users can interact with them — bot commands or other sorts of clicky-button stuff — and how we're going to try and change things. A preface: this talk is very much about experimenting and trying something new. This isn't an officially sanctioned Matrix thing we're going to do tomorrow, but it's certainly something I've been experimenting with over the last few weeks, and I'm hoping it will change the way we do integrations going forwards. And finally, a little bit on what's new in bridge land — just a little bit of news to catch up on, really. So first off, let's talk about bridges. I won't go into too much detail, because I'm pretty certain most of the audience has at least a vague idea of bridges — people usually fall into one of two camps. For those who don't know: bridges are basically a gluing mechanism for connecting remote services like Telegram or IRC into Matrix. Typically they exist next to a home server, a bit like this: you can see here we've got our clients, we've got our home servers, and we've got application services. The application service is where the bridge lives; it's connected to at least one home server, so it works in a sort of meshy-style system. Here is our example network of a bunch of different siloed services: you've got your Telegram, IRC, Slack, Gitter and so on, and they all exist separate from each other. Some of them might connect to each other, but there's no general meshing system, and of course that's what Matrix provides — everyone's fairly aware at this point that Matrix is the glue between them. I mean, if you've come to FOSDEM through IRC, then you already know this, because you're talking through IRC to Matrix — or Slack, perhaps. But generally speaking, this is our bread and butter. So this is a cross-section example of how we build a bridge today: an example of a Twilio bridge. This is actually the one I used in my previous talk at CommCon, where I built a bridge in about 100 lines of code to prove that you could build an SMS bridge with very little experience and, you know, just an hour of your time, really. In this case I'm just showing that we've got the client on the far end, which is what users use to interface with the home server and the bridge and so on, and then the bridge sits in the middle, talking between the home server and the remote API. What's interesting here is that the user never directly interfaces with a bridge — there's always a home server in the way — and that contextualizes some of the struggles we're going to see later with why bridges are sometimes very difficult to control. Now we can talk a bit about interfaces. Interfaces, as we're using the term here, means: how do users interact with services that aren't other humans, to put it plainly? If you have a bot in a chat, or a bridge, or some other service where the user needs to send a command or do a thing — how do they do it today?
How they do it today looks a bit like this: we have commands. Probably the majority of our communication with these services is through command-line-style interfaces, or whatever you prefer to call them. They've been with us for a very long time — from the old days of Matrix, mostly up to this day, we still use commands to control bridges. There are a few cases where we have nicer metaphors: you don't have to send a command to join an IRC network, for example, you can use aliases to join rooms. But by and large, if you want to do anything particularly complicated, like set your NickServ password, or join an unbridged room, or something particularly clever, that will have to be done through commands — which kind of sucks, because command interfaces are not great. It's pretty painful, honestly, to use these things. They're not particularly intuitive for new users who have never used a command interface before. Most people joining Element today are probably not used to TUI-based stuff — especially if they've come in through the mobile apps, or, you know, are used to looking through Facebook or something; they tend to be using clicky-button stuff, invites, QR codes — and that's perfectly fine. We want to support those users just as much as we support the more hardcore users out there. The problem is that as soon as you want to escape the bubble of simple Matrix chat, or the room directory and jumping into rooms, and you get to anything more complicated than that — you actually want to start using the more powerful features — you're suddenly hit with this wall of commands, and I don't think that's what people want to see. Matrix clients are beautiful graphical interfaces — a lot of our clients are quite clever and they look nice, and that's great — but then as soon as you use integrations, it drops to the floor: now you're just typing commands, typing help commands, looking for wiki pages, and that stuff just ruins the experience, I think, to some degree. We shouldn't really expect Matrix users to be typing bot commands. This isn't to say we should kill them off tomorrow; it's to say they shouldn't be the default, this shouldn't be the way we expect users to use our stuff. So how do we fix this?
We could extend the Matrix spec to cover the things that bot commands cover today. We could add events to do things like rolling a dice; we could try to cover every possible use case, even the more extreme ones out there. But typically this doesn't work well, because it increases the bloat of the spec: every little feature you want to add — maybe for your small little game on the side — means we have to add to the spec, which really means clients have to go and implement the feature in some form. That might be best for open standards and for ensuring everybody's working off the same spec, but in practice most clients probably won't implement it; you'll end up with a split brain between, say, Element implementing the feature and a bunch of other clients not implementing it. So it doesn't necessarily solve the problem for the small use cases — for the bigger things we should certainly try to put them in the spec, but for the small extra features it might not be the best way to do it. We could try to define a common language to describe all the extra functionality — some sort of, I don't know, XML-based language where you could define your forms and buttons, and some sort of scripting language where you could interpret the command and send it back. But we have that already: this is basically Matrix widgets. Some people don't know what widgets actually are, because they really only exist in Element so far — most clients don't implement them, though this will hopefully change at some point. Widgets, in a nutshell, are HTML interfaces — HTML pages — which you can insert into a Matrix room. They're completely sandboxed from the Matrix client: they have an API through which they can talk to the client, but they don't have access to all the databases and bits and bobs of the main client. They're a separate entity, and the idea is that you can build your own interfaces, or hook an existing website into your room. So if you have a collaborative document you want alongside your chat, you can embed it as a widget and it sits next to your Matrix room — we do that sometimes with a YouTube video, or an interface to control your Matrix bot, or something like that. The other nice thing is the capability negotiation system: by default a widget gets a very limited view of your account — it can't read your events, it can't read your account data — but if you want to grant fine-grained control to your widget, that's supported too, which is a super powerful feature when you consider that you don't necessarily want it to have full access to your account. Architecturally it looks a bit like this: you have your client again — bits from the earlier diagram — and a widget that sits on top of its own widget API. In the example we'll see a bit later, there's also a REST API between the widget and the bridge, and the widget has access to some of the data from your Matrix account, granted through the widget API. You can see here it's not a direct connection: the widget can't directly start calling the client-server API on its own. You could potentially provide an access token to your widget, but we try to make sure that all the calls to the home server go through the client, to avoid any potentially dangerous actions.
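To make that architecture a bit more tangible, here is a hedged sketch of the state event content that pins a widget into a room. Clients today use the de facto event type im.vector.modular.widgets (a standardized m.widget type has been proposed), with the state key doubling as the widget ID; the URL, names and data below are placeholders.

```rust
// Sketch only: content of the state event used to add a widget to a room.
use serde_json::json;

fn main() {
    let widget = json!({
        "type": "m.custom",                 // kind of widget
        "name": "Bridge control",           // human-readable name shown in the client
        // Placeholder URL; $matrix_room_id is a template variable filled in by the client.
        "url": "https://bridge.example.org/widget?roomId=$matrix_room_id",
        "data": { "bridge": "slack" }       // arbitrary widget-specific data
    });
    println!("{}", serde_json::to_string_pretty(&widget).unwrap());
}
```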
But yeah, typically this is how we would build a thing. The next thing I want to talk about is how we invite users to a DM over a bridge. This is a common problem we've got with bridges at the moment: bridges can't yet provide a mechanism to invite users who have never spoken on Matrix. We're perfectly able to invite users who are already chatting on Matrix — I can always invite someone from my IRC conversation to a DM, that's fine — but if the user has never used the bridge before, how does Matrix show a UI for someone who has never set up a profile and never really interacted? You'd have to have some sort of directory system. This can be achieved with other mechanisms, but one way that's really easy is to simply put a widget in your room: something where you can just type in the name, the bridge searches for them, and returns the results. And this is what we're going to demo now. So if I switch over to my lovely Slack demo here, you can see that I have a Matrix account set up on Element, and it has a few things in it: a Hookshot bridge and a Slack bridge, among other things. You can see we've already got a bunch of stuff in here, and crucially we have this bridge control widget. I just want to quickly show off the widget itself — not the widget's UI, but the content of the widget. You can see here, if I zoom in a bit so it's actually visible, a bridge room subscription pane which basically says: hello, this widget provides a feature for your bridge; in this case it provides an invite widget feature. Let's see how that actually looks. So I go to "start chat" and, in the client, look for people. You see the usual stuff, but interestingly you've got this "bridges" section, with a Hookshot settings room — which corresponds to this guy here — but also a Slack bridge room, which corresponds to our Slack bridge. So I click through to it and grant it permissions — again, this is our permission system — granting permission to verify my user ID. We click here, and then we're presented with this lovely dialogue box — I'll admit my design credentials are not as good as my bridging ones — but generally speaking it's a simple UI that searches for users on Slack. What this is actually doing is rendering a very generic window, which the bridge then fills in with some UI: this bit here is all running on top of the bridge, and the outer bit is Element. So if I now do a search, you can see there's this user here — I've never spoken to them from my account — but if I click this person, and type here, and go here, I've now started a DM with them. Then if I now send a message saying hello — and hopefully this person actually responds, we should find out — here we are, I've got a hello back. And if I now say hello back, just to prove this is definitely a live demo, you can see that's a message back. So the proof here is that you can look for people through this lovely system. I should mention that in our particular demo, the Slack bridge actually searches both your username and your display name, so it can find people through multiple directory keys, if you will. So that's pretty cool.
So that's our demo of how to do widgets — a very basic demo of how widgets actually work in a bridge context. Of course, you can do way more than that: we could have rich settings rooms and things like that, or images appearing in the rooms. One of the things I've always loved is the idea of a widget sitting down one side of the room, so you could see news or some sort of extra data your room wouldn't be able to provide on its own — that's the sort of direction you could extend this in if you wanted to, or you could write multiple invite types. So let's talk about the fallbacks, which you've seen me hint at slightly just now. One of the things still missing is that, obviously, not all clients support widgets, so we're trying to think about how we can do fallbacks for this. One idea is that, since the widgets are just calling REST APIs, we just expose those REST APIs and an authentication mechanism, so the client itself could call them. This would be good for the case where you're a client that doesn't want to bundle a whole web rendering engine but would love to support, say, the IRC bridge's widget API: you could inspect its OpenAPI spec and potentially build out an interface on top of that. That's one idea. We're also exploring some sort of limited form of form building — we want to be very careful there and make sure you still have the ability to do nuanced things, more than just a basic common form; we don't really want to start speccing a whole "forms in Matrix" thing, given that's a solved problem elsewhere. And of course, the one thing we want to keep is bot commands as a fallback — in the same way that on Linux you'll have, you know, command-line equivalents for pretty much anything you can do in the GUI. For the same reason, we want to make sure Matrix still has that fallback for people who don't want to present a UI, which is still a very useful thing to have. But otherwise, yeah, we're hoping this will be a way to build much richer integrations into existing clients. So, next steps: how do we take this further? We've got some MSCs on the way, as you would expect — there are two MSCs currently out there about how we can expose user IDs from widgets to clients, so you can start an invite or start a DM. I've got one of them, MSC 3662, which is specifically about user IDs and how you'd convey intents to the client about what you plan to do with them. There's already an existing MSC, 2931, which handles room IDs and events too, so probably one of those will end up winning and the feature will land in one of those forms. And there's the widget spec itself, which is in draft — also massive, huge congrats to Travis for writing an enormous document about how you do widgets; it's pretty complete, just not landed in the spec yet. For what it's worth, most of the widget stuff is fairly stable at this point in terms of actual usage — a lot of widgets are already in use by Element in various places — so we're pretty confident in the technology itself. We also need to land the bridge components for this demo: what I demoed there is a bunch of stuff that's in branches, most of it in a draft state, and the next thing I'd like to do is take this further and have it land as part of the core matrix-appservice-bridge SDK, with some work to start using it.
I'd also like to land the other web stuff we just saw there — again, most of it is in branches, but it's perfectly reasonable. One of the things I'd like to do on the web implementation in particular is have more settings views, so you can see which bridges you're subscribed to and control which bits you are sharing. We'd also like to develop the fallback solutions talked about earlier, so older clients don't get left behind. A crucial part of this whole thing is that we don't lose clients — the one thing you don't want when trying to build a better UI is to end up killing off a whole bunch of your existing user base by leaving them with nothing they can use. The only way we'd ever progress this widget stuff is if we kept access to all the existing commands and made them still as easy to use as possible, so that's one of the challenges we'd like to look at first. And of course, if all of this happens, we'd like to make sure the IRC bridge gains it, and then the Slack bridge obviously improves its support, and so forth — we'd like to make sure eventually all the bridges become as ergonomic as they can be. So yeah, the final thing we'll talk about is what's been happening in bridge land already. There are a few things going on around end-to-bridge encryption: it has now formally landed inside Synapse, and on the bot SDK side the setup stuff is very, very close — there are a few merges to happen before it's in stable — but I've played with it a bit and it's absolutely brilliant. So finally we've solved the problem of bridges participating in encrypted rooms without the overhead of having to call a sync request to get things like device messages down. It's a little bit technical, but the existing problem we've had with end-to-bridge encryption is that, to pull down information about encryption, you had to hit the sync API on the server, which was a polling action; polling across different users meant it was a bit slow, and the bridge bot had to be in lots of rooms it didn't need to be in. The new stuff all happens down the transaction API, which is a push technology — the home server sends data to the app service — which is meant to be a lot better for the whole performance footprint. So that's looking pretty good. The bot SDK itself has landed all the components in its beta branch, and I've been using it already; it's actually powered by the matrix-rust-sdk under the hood, which is pretty cool in itself. One thing to look out for is that maybe this year most of our bridges will gain end-to-bridge encryption by default, which is just really cool — the fact that we don't have to worry about whether a room's encrypted or not in order to bridge to it. Of course, one of the downsides to end-to-bridge encryption is that it's not fully end-to-end encrypted, but we're hoping at some point that encryption becomes a property that weaves through these various networks, if they all adopt the technology — we'll see. Otherwise, that's my talk. I'm hoping there'll be some questions, but that covers everything I planned to talk about. Thank you.
Matrix has already got the solid framework of many bridge and bot implementations, but has always been missing that polish to make the bridges more accessible. In this talk, Will will explain how we're going to build our bridges with an all-new interactive interface and replacing old bot command interfaces with widget based interfaces.
10.5446/56896 (DOI)
Hello, FOSDEM. Welcome to my quick talk about the Matrix Rust SDK. The Matrix Rust SDK is a Matrix client library. This talk will be in two parts: I'm first going to talk about the Rust SDK — how the SDK came to be, its design, and a case study of how it's used in Android land — and after that we're switching to Julian Sparber, who will talk about Fractal, a GTK Matrix client written in Rust which heavily uses the Rust SDK. Before I start, a little bit about me: I'm Damir, from rural Croatia, and since there's nothing to do in rural Croatia, I could either become an alcoholic or, thanks to the advances in technology, work remotely — so I thought to myself, why not do both? Before we jump directly into the Rust SDK, let's first talk about history a bit: how did we end up with the Rust SDK, after all? The story starts when I was shopping around for a way to chat with close friends in a secure manner. I was already using WeeChat for IRC and XMPP, and I didn't want to change my workflow — the classic XKCD, don't change a man's workflow, if you remember that one. One could use Off-the-Record messaging with XMPP, but that turned out to be a bit complex for non-tech friends. For those who don't know what WeeChat is: it's a text-user-interface IRC client written in C, some 18 years old. It's written in such a modular way that it's essentially a chat-specific text user interface framework, and everything else is a plugin — the IRC support is a plugin as well — so you can extend the client to support any type of protocol. Now, there are a bunch of assumptions in the text user interface that are IRC-specific, so you might sooner or later run into trouble with some different protocols. Here's what we're dealing with: it's a single-threaded program, it's event-loop-based and callback-based, extensions can be written in a bunch of languages, it can't spawn threads unless you're writing your extension in C, and it can open sockets for you — importantly, it opens sockets in a separate thread so as not to block the UI thread. I should mention that there already was support for Matrix, implemented as a Lua extension, though its end-to-end encryption support became stale: it didn't support the latest version and had various other problems. I started writing a new extension in Python, and I was a bit annoyed that I couldn't use the then-existing Python SDK. This nudged me towards making sure that whatever I wrote for weechat-matrix should be reusable for other projects, and thus matrix-nio came to be: a modular Python client library with a no-IO API — where you pull HTTP requests out of, and push HTTP responses into, a state machine — and an async API on top of that. Sadly, there were still problems with this, at least for the WeeChat use case. End-to-end encryption support in Matrix requires sending out multiple requests in order, which quickly leads to callback hell if you can't use asyncio — and we can't use asyncio in WeeChat. Another problem is that there's no multithreading, and expensive cryptographic operations might block the UI: if you're sending an encrypted message, the encryption keys need to be sent out to a bunch of devices, and you need to encrypt for each device separately, which might take some time depending on the size of the room — and in that time the UI will block. So relatively early on I started to experiment with Rust bindings for the WeeChat plugin API and an accompanying Rust Matrix client library. This leads us now to the Rust SDK. It also has a modular structure.
These names are all crate names, except the state store — that one should become a separate crate in the future as well. You could pick any of these crates and integrate it into an existing client library, possibly in another language, or you could pick the whole matrix-sdk crate and bind it into another language. I should note that we are using Ruma as our input validation layer: it parses all the responses into high-level types, so we don't need to deal with JSON directly. The crypto crate should be the most interesting one for library developers: if you are developing a Matrix client library which doesn't yet support end-to-end encryption, check out the crypto crate. It has a simple push/pull API — you pull HTTP requests out of the lib and push HTTP responses into the lib. You don't need to understand the complicated logic of how those responses need to be handled or parsed, which requests need to be sent out when, or how to verify signatures and signed objects; all of that is handled transparently in the crypto crate. As for the matrix-sdk crate: if you remember the code sample from nio, we mostly retained a relatively simple, Python-like API. Most things should be quite straightforward, and end-to-end encryption support should be completely transparent — messages get encrypted automatically and, where possible, decrypted as well. This is really similar to nio; we are using more types, obviously, because we are in Rust-land, but the flow is kept really similar. If you are familiar with nio, this should be an easy transition, and even if you are not, it should be relatively simple to write things. Of course, if you are interested in writing bots, this might be a little bit low-level for you; in that case a higher-level bot API built on top of the matrix-sdk crate would be the way to go. A bit more about the data flow in these crates: I already mentioned briefly that we have this pluggable design. At the top here lies the home server, which has the data. The matrix-sdk crate does a sync HTTP request, which asks for data: the first request will contain an initial state, and subsequent calls will contain only the delta changes from the previous state. We push the response into the matrix-sdk-base crate and into the matrix-sdk-crypto crate. Those two crates contain state machines that consume the HTTP responses, update the internal state and finally generate some outgoing requests. Those requests are important to keep the encryption going, or to keep your state correct: you need to track other people's devices periodically and upload new cryptographic keys to the server — basically, everything that needs to be sent to the server to have working end-to-end encryption support will be pushed out of the state machine as a request that needs to be sent to the server. This does mean that all our networking IO happens in the matrix-sdk crate, while the base and crypto crates behave mostly as pure state machines. Some persistence is there — we are persisting things like encryption keys and room state — so that IO is not pulled out of the crates but is abstracted away as a trait. As for where the Matrix SDK is used: among other things, there is Fractal — a high-level, probably flagship client using the SDK. It supports encryption, QR code verification, emoji verification — every advanced feature should be supported by Fractal. weechat-matrix-rs is my Rust rewrite of weechat-matrix; it has become a bit stale due to lack of time, but eventually it should replace weechat-matrix.
We have the matrix-sdk-appservice crate; this one is built on top of the SDK and is used to build bridges and privileged bots for various use cases. Some things only use the crypto crate: Element Android, for example, uses the matrix-sdk-crypto crate, and the matrix-bot-sdk uses the crypto crate as well. Element Android is written in Kotlin and matrix-bot-sdk is written in JavaScript, and there are bindings from Rust to Kotlin and to JavaScript. Let's take a closer look at how the Rust SDK is used in Element Android. Do note that this isn't yet in production, but in a late beta phase. It only uses the crypto crate, and we will take a look at how this is integrated. Firstly, we are using Mozilla's UniFFI crate to generate the Kotlin bindings. This works a bit differently from wasm-bindgen or PyO3, if you are familiar with those crates: you have a separate file where you define your interface, and the bindings — for Kotlin and other languages as well — are generated from that separate definition. Our main structure here is called the OlmMachine. This is the state machine that handles the cryptography, so let's take a look at the code snippet. This is the first method we need to bind; I'm skipping the OlmMachine initialization, but creating an OlmMachine is quite simple — you just pass a user ID and a device ID. If you remember our data flow graph, this method is the push method: it pushes changes we receive from the server into the state machine. You don't need to push the whole sync response into the state machine, just the smaller parts that are directly tied to the crypto support: the to-device messages will contain room keys, the device changes are important for tracking which devices a user has, and the key counts are important so we know which keys we need to replenish on the server. The to-device events will then be decrypted and returned by the OlmMachine, and you can consume those events separately. Next up is the pull method. This pulls out all the HTTP requests that are pending, either because we just initialized the OlmMachine or because the sync changes produced them. There are a bunch of request types we can get from this — I haven't listed them all. It's best to send them all out in parallel, and you need to notify the OlmMachine after each request is sent out: the response needs to be passed back to the OlmMachine so it knows the request has been sent. If one of the requests fails to be sent — which can happen, especially on mobile connections — the SDK will keep handing you the request until you tell it that it has been sent out. This is a common pitfall with various clients: people tend to forget to retry sending a request, and that request might have contained the encryption keys for other people, so suddenly people can't decrypt the messages. As a backup plan, if you fail to send a request, the OlmMachine will have your back and let you know that you still need to send it out. This isn't done for all request types — some of them are ephemeral, after all — but the most important ones, for example the room keys, the decryption keys, will be kept in memory until you send them out. And that's it: now you can decrypt messages. This doesn't happen automatically; there is a separate method for it, because we don't know what your user is looking at, and you want to prioritize decrypting the messages your user is actually looking at — so you can do that quite easily.
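To summarise that receive/send-out loop in code form, here is a small self-contained sketch. It is not the real matrix-sdk-crypto API — the method names only mirror the ones described above and the types are heavily simplified — but it captures the push/pull shape and the "keep handing back unsent requests" behaviour.

```rust
// Illustrative only: the push/pull shape of the crypto state machine.
// Real signatures in matrix-sdk-crypto differ.

struct OutgoingRequest {
    id: u64,
    body: String, // e.g. a keys upload, keys query or to-device send
}

#[derive(Default)]
struct OlmMachineSketch {
    pending: Vec<OutgoingRequest>,
    next_id: u64,
}

impl OlmMachineSketch {
    /// Push: feed to-device events, device-list changes and one-time key
    /// counts from a /sync response into the machine.
    fn receive_sync_changes(&mut self, _sync_changes: &str) {
        // Processing the sync data may queue new requests (key uploads, ...).
        self.next_id += 1;
        self.pending.push(OutgoingRequest { id: self.next_id, body: "keys upload".into() });
    }

    /// Pull: everything the machine wants sent to the homeserver. Requests
    /// stay here until they are explicitly marked as sent.
    fn outgoing_requests(&self) -> &[OutgoingRequest] {
        &self.pending
    }

    /// Tell the machine a request really went out, so it stops handing it back.
    fn mark_request_as_sent(&mut self, id: u64, _response: &str) {
        self.pending.retain(|r| r.id != id);
    }
}

fn main() {
    let mut machine = OlmMachineSketch::default();
    machine.receive_sync_changes(r#"{ "to_device": [], "device_lists": {} }"#);
    // In real code the requests would be sent out in parallel; here we just mark them sent.
    let ids: Vec<u64> = machine.outgoing_requests().iter().map(|r| r.id).collect();
    for id in ids {
        machine.mark_request_as_sent(id, "{}");
    }
    assert!(machine.outgoing_requests().is_empty());
}
```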
Now for the sending side, we need to do two things: we need to establish one-to-one secure channels with each device we're going to talk to, and we need to send each of these devices a room key, encrypted over that secure channel. So you first ask the OlmMachine, hey, who doesn't have a secure session yet? It returns a request, and after you pass the response back to the OlmMachine, mostly everyone will have a secure channel established. After that, we call the share room key method, which encrypts the room key for the given room and for those members; this will generate either one request or a bunch of requests, depending on how large the room is, and you can send those out in parallel as well. And if you fail to send some of them, the OlmMachine will tell you, hey, you still need to send this out, the next time you call the share room key method. And that's it — now you can encrypt messages as well. Well, almost: I skipped one method where you need to tell the SDK which users are part of encrypted rooms, so it can track the devices those users have. But even so, I think this is quite a simple API compared to what you need to do if you implement all of this from scratch. Let's take a quick look at the performance of Element Android using the matrix-sdk-crypto crate — if nobody has been convinced to use the crypto crate so far, perhaps this will do it. As already explained, to reiterate: to send an encrypted message, clients first need to share what we call a room key with every device in the room. To do this, the room key needs to be encrypted for each device separately, and this graph shows how long clients take to share that room key. Now, this isn't an apples-to-apples comparison, since the measurements use different devices, but the slowest device should definitely be the Android mobile phone — and the Android mobile phone is the one that's using the Rust SDK. The corroded entries on the graph are Element Android converted to use the crypto crate of the Rust SDK, and we managed to lower the time required from minutes to seconds. Now, this isn't all because it's Rust or because we're using multi-threading — there are certainly problems in Element Android — but I don't think anybody else uses multi-threading to encrypt room keys, and therefore I don't think anybody comes close to the performance of the Rust SDK. Sharp-eyed people might notice that there are two versions of the corroded Android code base; one is neatly labelled vodozemac, and we'll get to that next. What does this mean? A bit of a digression here: what does vodozemac actually mean? Vodozemac is the Croatian name for amphibians. Why amphibians? Well, amphibians have some amazing self-healing properties — like a gecko: you catch the tail of a gecko, the gecko will discard the tail and it will regrow. And because the Double Ratchet protocol, which is used in Matrix, has amazing self-healing properties as well, implementations were usually named after some amphibian. The original implementation for TextSecure, the predecessor of Signal, was called Axolotl; the Matrix version was, or is, called Olm; and vodozemac is a pure Rust implementation of the Matrix variant of the Double Ratchet protocol. So why would you ever rewrite your cryptographic library — why would you ever rewrite anything in Rust? Yes, libolm was stable and widely used. But let's take a look at the API of libolm and see why we might not like it that much.
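Before turning to libolm, here is a small toy that pulls together the sending-side flow just described — establish a session with every device that lacks one, fan the room key out to each device, and keep unconfirmed key shares around for retry. It is purely illustrative (the struct, device names and key strings are all made up) and is not the real matrix-sdk-crypto API.

    use std::collections::{HashMap, HashSet};

    struct ToySender {
        sessions: HashSet<String>,                  // devices we share an Olm session with
        pending_key_shares: HashMap<String, String>, // device -> key share, until confirmed sent
    }

    impl ToySender {
        fn missing_sessions<'a>(&self, devices: &'a [String]) -> Vec<&'a String> {
            devices.iter().filter(|d| !self.sessions.contains(*d)).collect()
        }
        fn establish_session(&mut self, device: &str) {
            // In reality this is a key-claim request plus an Olm handshake.
            self.sessions.insert(device.to_owned());
        }
        fn share_room_key(&mut self, room_key: &str, devices: &[String]) {
            for d in devices {
                // Each device gets its own copy, encrypted over its Olm session.
                self.pending_key_shares.insert(d.clone(), format!("olm[{}]", room_key));
            }
        }
        fn mark_sent(&mut self, device: &str) {
            // Shares that were never confirmed are handed back on the next call.
            self.pending_key_shares.remove(device);
        }
    }

    fn main() {
        let devices = vec!["ALICE_PHONE".to_owned(), "BOB_LAPTOP".to_owned()];
        let mut sender = ToySender {
            sessions: HashSet::new(),
            pending_key_shares: HashMap::new(),
        };
        for d in sender.missing_sessions(&devices) {
            sender.establish_session(d);
        }
        sender.share_room_key("megolm-session-key", &devices);
        for d in &devices {
            sender.mark_sent(d); // pretend the to-device request succeeded
        }
    }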
The snippet on the slide creates an Olm account — this is basically the identity of your device, and each one of your devices will have its own Olm account. With libolm this involves a couple of steps: ask libolm how many bytes we need to allocate; ask libolm how many bytes of randomness we need to generate; allocate the memory and generate the randomness; and then call the initialization method. So this is a five-step process, and basically anything you do will have the same layout. This shows us a couple of problems with libolm. It has an opaque, buffer-based API: you pass in a bunch of opaque buffers, and if you reuse your buffers you might get a slightly incorrect result that's hard to notice. You have your input buffer, your randomness buffer, your output buffer, and all of those could be interchanged without anybody noticing. This has actually happened to us: we got a slightly incorrect base64-encoded message because the input buffer was reused as the output buffer, so the base64 encoder was encoding an already-encoded thing — a slightly incorrect, hard-to-reproduce bug in another library. All of that happened because the buffers are all opaque and you don't know what you're passing in. The API is also quite complicated: you need to make multiple calls to generate randomness and allocate things, and every language binding of the library needs to allocate and generate randomness separately, all over again. The next one here is downright dangerous: asking for an allocation size might return a negative value. Every time you want to do something, you ask the library how many bytes you need to allocate, and there are a bunch of such methods — some of them might return an error, some of them can't. If you forget to handle that error, which is returned as a negative value, you might try to allocate all of the bytes, because the negative value gets converted into a very large unsigned value — and you have possibly just added a denial-of-service vulnerability to your client. Furthermore, the implementations used for the underlying cryptographic primitives — Curve25519, Ed25519 and the Advanced Encryption Standard — aren't that great either. So these are just some of the problems of libolm, and it already shows that people would need to work a lot on libolm to get it to a modern state. So let's take a look at how the vodozemac API creates an account. These snippets are in three different languages, and all of them create an account in the same way: you just call the constructor, and everything gets allocated and the randomness gets generated. This isn't really a criticism of C or libolm — at the time it was probably the best idea to leave the allocations to the various platforms — but nowadays the tooling will take care to use the correct implementation on WASM and Python and Rust. And this looks much simpler, right? Bindings become much easier and can have the same high-level API as the Rust version, and the bindings no longer allocate or generate randomness themselves. So here's what vodozemac offers: a high-level API which, thanks to Rust, is easily bound into different languages — using WASM bindings, UniFFI, you name it — for your language binding of choice. We are using modern implementations of the cryptographic primitives from RustCrypto and from the dalek crates, so x25519-dalek and ed25519-dalek, and all of those have been audited. We are also going to audit vodozemac itself. And of course, we provide an upgrade path from libolm, which I'll say a bit more about in a moment.
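As a rough illustration of that difference, here is what account creation looks like with the vodozemac crate in Rust — one constructor call instead of libolm's size-query, allocate, randomise, initialise dance. This is a sketch assuming vodozemac as a dependency; the accessor names reflect the crate around its initial release and may have changed since.

    use vodozemac::olm::Account;

    fn main() {
        // One call: allocation and randomness are handled internally,
        // unlike libolm's five-step buffer juggling.
        let mut account = Account::new();

        // Generate a handful of one-time keys to publish to the homeserver.
        account.generate_one_time_keys(5);
        println!("{} one-time keys ready", account.one_time_keys().len());

        // The long-term identity keys are available through plain accessors.
        let _curve25519 = account.curve25519_key();
        let _ed25519 = account.ed25519_key();
    }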
On that upgrade path: libolm has a way to persist — to pickle — all of its objects, and vodozemac can read the data format libolm uses. We are still not sure which data format vodozemac itself will use: maybe something serde-based, or maybe it will keep using the binary data format libolm uses. And the API is quite similar to what the different libolm bindings were exposing, so it shouldn't take too long to switch over to vodozemac. And this concludes the brief overview of the Rust SDK and what else is happening around it. The rest of the talk will take a deeper look into Fractal, how it works and how it utilizes the Rust SDK, and this part will be explained by Julian Sparber, as mentioned. Off to Julian — bye. Hi all, I'm Julian, and I'm going to talk about Fractal today and the journey of writing it. Fractal is a native Matrix client built for the GNOME desktop. It follows the design principles laid out by the GNOME design team and works closely with them, and therefore it has a huge focus on usability as well as on user experience. Before we dive into the project, let's have a quick look at the project's history. It was started all the way back in 2017 by Daniel García Moreno, who wanted to build a GTK client in Rust, and therefore he forked ruma-gtk — now called Fest — written by Jonas Platte, even though that project did not see much development since the fork, or in the last couple of years. Fractal, on the other hand, got huge attention over the last years; we even participated multiple times in GSoC and Outreachy, and I have been one of the GSoC students and later also a mentor, which was quite fun. And yeah, it has now been five years — a long time. So, is Fractal complete? I can answer that quickly: no, it's not. We're actually doing something interesting: back in February, a year ago, we decided to do a rewrite. A rewrite? Yes, a rewrite — but this time with end-to-end encryption support. Fractal, even though we wanted it from the very beginning, never had encryption support, because Fractal started out as a free-time project and has mostly been run and developed by volunteers, and it is really hard to get something like encryption support built in your free time. So we never got around to it; we kept postponing it. But this time it's going to be different — we did it from the beginning — and now I'm going to give you a couple of reasons why we decided to do the rewrite. First of all, as you probably know already, there is the matrix-rust-sdk. It's pretty new and still at an early version, but it's very usable, and we also spent a lot of time contributing to it and providing feedback so that we could use it. And we managed to build a client on top of it — it is not finished yet, but I will show it later in the demo. The next reason is that GTK4 got released a year ago, and GTK is the UI toolkit we use to build Fractal. Initially we used GTK3, because that was the most recent version available, and now we have GTK4, which comes with a lot of improvements — as well as new language bindings to Rust, because we are talking about the Rust world here and GTK is written in C. The bindings that connect GTK4 to the Rust world got a huge improvement; a shout-out to all the people who did an awesome job improving them and working on them over the last few years. So much changed that our old code base was pretty much outdated, and with a rewrite we could use all the new fancy features of GTK4 and the new bindings for it. Last but not least, libadwaita was released — the stable release came out quite recently, like two or three months ago.
And that library is the successor of libhandy — maybe you have heard of it already. It's a library for building responsive, adaptive GTK apps, so Fractal will also run on small screen sizes, for example mobile phones. And actually libadwaita is much more than just a library to build adaptive apps; it's also a building block for every modern GNOME application. So all of that sounds like a good reason for a rewrite, right? And yes, you are right — it is. After first trying to integrate the matrix-rust-sdk into the old Fractal code base — mostly done by Hando, who also was a GSoC student — we realized quickly that it was not going to work, because too many things had changed. So then in February, as I have already said a couple of times, we launched the Fractal-Next initiative, and now it's all about rewriting Fractal on top of the matrix-rust-sdk. At this point I need to mention that this effort would not be possible in my free time — or, for that matter, for anybody to build a project in that time span with that many hours of work on it — if it was not financed by somebody. We are lucky: I applied last year for funding from NLnet, and we were awarded a grant as part of the Privacy and Trust Enhancing Technologies fund to implement end-to-end encryption in Fractal. The initial plan was always to use the matrix-rust-sdk and build on top of that, but initially the idea was not to rewrite Fractal; after a couple of months we realized that it did not make sense to spend more time on the old code base, so we asked for an extension, and they were happy to support the rewrite as well. NLnet gets most of its money pretty much directly from the EU Commission, so you could say this work is sponsored by the EU Commission. Additionally, I want to thank Kévin, Tobias, Kai and Alejandro for helping a lot during the last few months — Kai and Alejandro mostly during the summer months, Kévin the rest of the time, and Tobias with design support during the entire project — and it's still not finished, just to mention that again. So now let's get a little bit more technical and see what's happening inside Fractal. As you probably heard in the previous part of this talk, the Rust SDK is multi-threaded and uses Tokio as its asynchronous framework — not exclusively, but mainly. On the other side, Fractal uses GTK to draw widgets and UI, which requires it to be single-threaded. Obviously this does not mean that Fractal can't use multiple threads — we do use multiple threads. The idea we came up with is that we have a main thread, called the UI thread, which handles all the GTK things, and on the other side we have Tokio running the tasks needed for the Rust SDK, like making requests and everything else the Rust SDK does for us. To communicate between those two we use two different approaches. Mostly we use channels and futures, which — if you're familiar with Rust or, really, any modern language — you probably know already: channels are a way to communicate between threads or tasks, which is exactly what we do in Fractal. Furthermore, we need to wrap all the Rust structs — the Matrix events and structs coming from the matrix-rust-sdk — into GObjects, so that we have a data model which can then be bound to the UI, which we already have.
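Roughly, that pattern looks like the following sketch: one shared Tokio runtime for the SDK work, and a glib channel to hand results back to the GTK main thread. This is a simplified illustration — the function name and the "Alice" value are made up, and the exact glib-rs APIs (MainContext::channel, PRIORITY_DEFAULT) have moved around between binding versions.

    use once_cell::sync::Lazy;
    use tokio::runtime::Runtime;

    // One shared Tokio runtime for everything the SDK does off the UI thread.
    static RUNTIME: Lazy<Runtime> = Lazy::new(|| Runtime::new().unwrap());

    // Run an async SDK call on the Tokio runtime and hand its result back to
    // the GTK main thread through a glib channel, where it can safely touch
    // widgets and GObjects.
    fn fetch_display_name_in_background() {
        let (sender, receiver) = glib::MainContext::channel::<String>(glib::PRIORITY_DEFAULT);

        RUNTIME.spawn(async move {
            // Imagine an SDK call here, e.g. fetching the user's display name.
            let name = String::from("Alice");
            let _ = sender.send(name);
        });

        receiver.attach(None, move |name| {
            // Back on the UI thread: update labels, GObjects, and so on.
            println!("display name: {}", name);
            glib::Continue(true)
        });
    }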
Additionally, the matrix-rust-sdk provides a mostly asynchronous API. That does not mean the data is never immediately available, but we cannot rely on it being available, because it may live on a remote server and the matrix-rust-sdk still needs to request that information. Therefore we do some local caching inside the GObjects — a GObject being pretty much an object holding data — which in turn gives us synchronous access to the data, so that the UI feels snappy and reactive. One of the most important building blocks of Matrix is the sync request, and in the Rust SDK too you need to perform the sync to synchronize the state between the homeserver and your local machine. Most of that is actually handled by the Rust SDK, which is pretty nice, but there are different ways to obtain the sync response from the SDK. One is to manually make the sync request without letting the SDK handle anything, which you probably don't want, because then you have to handle everything yourself, like encryption and key sharing and that stuff. Then there is another version where you get a stream of sync responses: you read from the stream and the matrix-rust-sdk pushes the new responses it receives from the homeserver into it. We are using that version and move the responses into a channel, to be able to update our GObjects — our local cache of the client's state, for example a room's state or something like that. The third option, which is quite new and which we would like to switch to sooner or later, but haven't gotten around to yet, is that you can register event handlers: you write something like "I want to get notified when a certain event happens", and then a closure or callback you registered before is called automatically for you, which is really nice. But when we started, that type of event handler did not exist, so we couldn't use it, and we haven't gotten around to reworking our code yet. So now let's have a quick look at Fractal itself. After starting Fractal you see a quick loading spinner while it updates the state and executes the sync; obviously the loading time depends on when it was last synced, but I did that quite recently, so it was really fast this time. The UI is pretty similar to previous versions of Fractal, except that it got a facelift and was updated to the new style guides. So let's quickly look at what we see here. On the left we have a list of rooms in different categories, and we can even move rooms between categories. What are we going to move — I don't know, the GTK room, for example: we can click "mark as favourite", it disappears, and now it's up there. And if we want, we can open it and see the room history; it even loads more events — this already works. What else can we do? We can move it back out of favourites, unfavourite it into the normal rooms, and then we can explore new rooms and join them — for example, let's join the Fractal room. Oh, actually, we are already in the Fractal room, so we can't join it, but we can immediately view it from here, which is pretty nice. It takes some time to load. Okay, here we go: some nice hearts, me reacting to them — that's a really nice feature that Kévin implemented quite easily, and I'm excited about it because Fractal did not have reactions before. I can't react to that one, because it wasn't sent with this account, but I can add one, like a heart, and remove it again.
And everything — yeah, it feels pretty solid. But we're still missing features. We can even send messages — live demo, let me send a quick test message here. So what else do we have? We have basic room details; this account cannot modify the properties of the room because it's just a test account. As well, we have a list of members with admin rights, pretty much like in the previous Fractal version. We can invite people — let's invite Julian, myself. Oh, actually, I forgot: users that are already in the room are excluded from the search, so I can't invite myself. Let's search for the Fractal test account. Oh, a fake Julian account — we can invite him. Let's invite him. Okay, here we go, the fake account was invited to this room. What else can we do? Something really interesting and fancy that Kai implemented during his GSoC is multi-account support: I could add a second account here — but let's not do that right now — and use Fractal with multiple accounts. Then we have the device list. Maybe you noticed there is a small shield, which means that this Fractal session is currently verified. Here below we have a session that is not verified; we can remove it, and obviously it asks for authentication. What else, what else? We can join new rooms — create new rooms, I meant to say. Inspect the event source. Oh, nice, we even have GIF support — this was not planned, actually — we can see a dancing bear. So if you want to help, then please do come talk to us on Matrix or have a look at our repository, and we will happily help you and direct you to something you can do. And yeah, thanks for your attention. Again, I'm Julian, and if you have some questions, feel free to ask them. All right, thank you both for your talks. I've got Julian and Poljar here with me for the live Q&A, and we're going to start with the first question for Julian on Fractal. So, NLnet provided funding for Fractal, and that was super important to get the core of Fractal-Next written. But did you get any interest from potential paying customers in Fractal-Next, and are you open to potential sponsored work on it? So yeah, actually, we did not yet get any interest from any side other than NLnet to sponsor our work, but maybe in the future — I would definitely be open to doing some work on Fractal-Next with a company sponsoring it. At this point I also want to mention that the NLnet grant is pretty much over — it's actually already over — and now I'm working on it in my free time. So it would be really nice to have a company sponsor the work to finish Fractal-Next and get the release out there. Right, so you've heard it: if you want to see some new cool stuff in Fractal and you want to sponsor it, just contact Julian. One for Poljar now. So, is the matrix-rust-sdk entirely spec compliant, and if not, what are the outstanding missing features? Spec compliance is generally handled by Ruma, which handles all the parsing of HTTP responses and requests into types; the main missing things are knocking, spaces, and a bunch of smaller things in the capabilities area. But yeah, we are targeting all of those — it always starts with changes in Ruma and then the rest of the SDK. Right. And do you have a rough estimate of when we should expect the Rust SDK to be fully spec compliant? We're always playing catch-up, because up until now it was mostly a one-man show by me; we're now getting more people to work on the Rust SDK, so maybe this year we will finally catch up with all the spec work. All right.
Excited about that — looking forward to seeing it improve. Back to Julian now. Can you give a rough summary of what's missing in Fractal-Next before it can be officially released? So that's actually quite the question, because it's still missing a lot — but on the other hand, we have already implemented a lot of stuff we did not have in the previous version of Fractal. I think a major point is still the account settings: you can't configure your account, you can't change your username or anything like that. Account registration, probably, as well, and we need to add SSO — single sign-on — which a volunteer is working on. What else? If, in a room history, some keys are missing to decrypt a message, we don't request them — or rather, we don't show it in the UI — and it will just show forever that the message couldn't be decrypted. And in general a little bit of polish — and I'm probably forgetting a lot. From the code base it's pretty easy to add stuff; we just still need to do it. Yeah, definitely looking forward to seeing Fractal-Next in the coming future. Another question for Poljar: is vodozemac — I'm pretty sure I'm pronouncing it wrong — using other crates for crypto, or does it re-implement absolutely everything? Yeah, of course we are using other crates. I think I mentioned in the talk that we're using mostly crates from the RustCrypto project; those are Rust implementations of the crypto primitives — we use the Advanced Encryption Standard crate from there — and the ed25519-dalek and x25519-dalek implementations are used for the Double Ratchet. Makes sense. Now this question — I'm not entirely sure which one of you is best placed to answer it, maybe Julian: what features were missing from the Rust SDK that were required for Fractal? First of all, what was missing was a semi-polished, high-level API to interact with rooms, which was not there a year ago and has now gotten way better. Also, we were missing a good way to get new room events; now we have the streaming API, which was added by Jonas Platte, I think, and which we actually don't use in Fractal yet, but it's very nice. Most of the time I actually spent on the message store, so that the entire room timeline is stored in the SDK — but I also wasted a lot of time on that, because I tried to store all the events, and Matrix does not actually allow that, because it does not give you diffs of the sync. So if the timeline changes, or events get pushed into the timeline because of federation or something like that, we don't really know how that works or how we get those updates, so we can't have a fully persistent local store. But now I have a good plan to do it — not completely persistent, but more like a cache — and that should be published soonish, in a week or so. Oh, right. So it will be added. All right, now for maybe the final question — that's one for Poljar. Given that you're the maintainer of both the matrix-rust-sdk and nio, is there a chance that we will see a matrix-rust-sdk-powered nio in the coming months? Hard to tell. I really want to do this, but there's really a lack of time, and not many people are that interested in the Python side compared to the various other bindings. The only part nio would be interested in currently is the crypto crate, and nothing there would touch Python land directly; it wouldn't reuse the bindings the crypto crate generates for the various other languages — it would use its own.
So it's really a wanted feature, but nothing really touches Python in that case — at least not that I am aware of. Right, yeah. So we'll see what happens, and when it happens — but yeah, definitely keep an eye on the progress. And now, probably definitely the last question this time: what kind of bindings does the Rust SDK expose for non-Rust users? All right, we just lost them — we lost them all. Yeah, we lost all the Rust SDK experts, and in a timely way, because the Q&A is reaching the end. So thank you both for your great talks, and have a great rest of the conference. Thanks. Thank you for having me.
Matrix is an open protocol for secure, decentralised communication - defining an end-to-end-encrypted real-time communication layer for the open Web. Historically the network has been made up of newly written native Matrix clients, or bridges to 3rd party existing chat systems (e.g. Slack, Discord, Telegram). The matrix-rust-sdk is a modular Matrix client library, meant to be a robust implementation of the protocol, and to make even the most advanced features such as E2EE easy to use. This talk will walk you through the design decisions and the tradeoffs that come with it, give you an overview of where we're at and where we're going with Web-Assembly, and finally what the future holds for the matrix-rust-sdk. Finally it will be connected to a real-life example of software using the matrix-rust-sdk: Fractal-Next.
10.5446/56897 (DOI)
Matrix is an open network for secure, decentralised communication, without any single point of failure. So the whole idea is to effectively create the missing communication layer of the real-time web. No single party owns your conversations; they get replicated equally between the servers participating in the conversations, and so you effectively have subversive decentralisation: I cannot talk to somebody on another server without going and equally sharing ownership of my conversation with them. It's very like git, where each git repository is equivalent, except here we're doing it with communication rather than code snippets. So this is a diagram of Matrix. Here we have five Matrix homeservers, each with a couple of Matrix clients hanging off them, talking HTTP and JSON over the client-server API by default, and then talking HTTP and JSON between the servers as well. We can then add in application services, which provide additional functionality to these Matrix servers, such as bridges to other protocols or perhaps bots. We also have identity servers, which are optionally used to look up people's Matrix IDs based on their phone numbers or email addresses. What we can then do is expand this out further and link existing communication silos — whether it's Slack, Discord, WhatsApp, Gitter, IRC or XMPP — into Matrix via these bridges, and thus Matrix effectively becomes the decentralised glue linking together the various communication silos of today. The Matrix ecosystem itself hinges around the Matrix specification at spec.matrix.org: a whole pile of documentation describing the HTTP and JSON APIs that connect clients at the top to servers at the bottom, and indeed servers to application services. We provide some reference stacks from the core team for JavaScript, iOS and Android. We also now have Hydrogen as a lightweight JavaScript app and SDK, very distinct from the original matrix-js-sdk, and also the matrix-rust-sdk as the next-generation native SDK, used for instance by the Fractal-Next community project and hopefully more of our own projects in future. Then on the server side we provide a Python implementation, Synapse, and a Go implementation called Dendrite; Dendrite is increasingly close to exiting beta as our next-generation approach, and meanwhile Synapse is getting more and more mature as time goes on. Then we have application services, such as the bridges we've already mentioned, and then many, many community projects out there for server implementations, bridges, clients, etc. Looking at Matrix in numbers, here are some graphs. This is the total known MXIDs ever over the last five years since we've started recording it: we've got almost 50 million total MXIDs reported back by phone-home stats from Synapses so far.
About half of these are native to Matrix and half of them are bridged in from other platforms, so that's about 25 million Matrix-native users. An interesting new graph is monthly active users, which we started tracking about a year ago via Synapse phone-home stats. It's a bit of a random walk: you can see it has been wobbling around a bit over the last year, doubling overall from 900k in January 2021 up to about 1.8 million as of now. It varies a bit as big servers come and go, and often we see a pattern where they turn off their stats reporting once they get beyond a given size. If you're running a big Matrix server, please keep the phone-home stats turned on so that we can keep track of how well the protocol is doing — it helps us get funding in, as well as making us feel happier about our lives. Then we have total messages per day. This is over five years, going from 1 million back in 2017 up to 10 million messages a day. For context, this is small relative to, say, Discord: Discord is up at about a billion messages a day, and WhatsApp I think has got 100 billion messages a day, so there is still a long way for us to go — but the graph is pointing up and to the right, and we think that as we improve things there is a real chance of the network effect snowballing further and Matrix continuing to expand at an even faster rate than it has so far. So what can we do to make Matrix improve faster? Originally our goal with Matrix was for it to work at all, and we finally got there in 2020 when we exited beta; then we shifted to making it work right — well, we did that in order to get it out of beta, to be honest — and since then we have been making it work fast, and that is where the emphasis is right now. One of the main things we have been doing is implementing an entirely new sync API, which controls how servers synchronise data to clients, called sliding sync (originally called v3 sync). The current approach scales really badly: it scales with the number of rooms that you are in, so a big account like mine, with about three and a half thousand rooms, can literally take five minutes to do an initial sync, and even incremental syncs take a long time. Rooms should be cheap — they should be like directories in a file system. You wouldn't be happy if your file system slowed down just because it had lots of directories, and likewise you shouldn't expect Matrix to slow down just because you are in lots of rooms, particularly as we are using more and more rooms for more exotic things like spaces (groups of rooms), profiles, feeds, reputation feeds, all sorts of things — you want to be able to hop in and out of them with gay abandon. So sliding sync changes how the sync API works so that it only syncs data about the rooms which your client is currently showing, meaning it scales in constant time relative to your room count. It does this by maintaining a sliding window over the visible rooms in your client, and as you scroll up and down it tells the client where to insert, delete or refresh items in the list — effectively doing server-side pagination. And it looks like this.
So here is Hydrogen in sync v3 mode, and here is my room list; as I scroll around you can see it's pulling in new conversations on demand, and if you look at the console you can see operations like DELETE and INSERT happening as it updates what is being removed from and inserted into the list of rooms I'm in. But the really exciting thing is that if I hit refresh, like so, that is an entire initial sync — not even an incremental sync — of my massive account. That would have taken about five to ten minutes on Element, and if I go to my network tab and look at the timing of that initial sync, it took 190 milliseconds to generate. That is a 1200-times speed-up over where we've been historically. Really exciting times. We've got it implemented at the moment in Hydrogen, on a branch to play with, but we're going to roll it out as rapidly as we can to as many other Matrix clients as we can over the coming year. You can actually play with it today if you want, using the sliding sync implementation on the matrix-org GitHub project; it works as a proxy that converts sliding sync into legacy v2 sync, and you can check out the Hydrogen support I was just demonstrating. The core API is now focused on synchronising room data, so all the other bits — to-device messages, end-to-end encryption, presence, read receipts, ephemeral messages, account data, whatever else — are split out into extension MSCs, which is a much better architecture than it all being muddled together like it was in v2. But there is a caveat, which is that, just for a change, end-to-end encryption is a real pain: the server has to send all of the end-to-end encrypted traffic to the client just in case that traffic includes notifications, because the server obviously can't figure out your notifications for you if it can't read your traffic. So that is a slight fly in the ointment of sliding sync, but for particularly large accounts with lots of public rooms it is a huge, huge improvement, as you can see. Another big thing we're working on is fast joins, and this is probably our second most complained-about performance problem in Matrix, after startup times: when you join a room over federation from a fresh server, it takes forever. In Discord you can join a so-called server with 100,000 users in a second or two; on IRC you can probably join a big channel in a few seconds as well. On Matrix, if I try to join a room of 10,000 users from a fresh server, even if it's the fastest server in the world, it's probably going to take 10 minutes, and it's going to keep erroring out every 90 seconds due to HTTP timeouts. The problem is that we currently synchronise all of the room state before letting the user participate in the room — which felt quite reasonable when we first created Matrix, because if the user can't see what's happening in the room, are they actually going to be able to participate sensibly? But the problem is that a room with 10,000 users could easily have 30 megabytes of JSON that needs to be parsed and authenticated — phoning home to the server that originally created each event, checking its key, checking the signature, verifying it fits into the Matrix DAG, persisting it, all this stuff — and it really, really racks up.
So what if instead we only sync the minimal state that the client needs to participate? That means the most recent users who are speaking, the ones you need to give the room a name if it doesn't already have one, any bans or kicks (so you know whether you've been banned or kicked), the state you need to authenticate those events as being correct, and perhaps a list of servers in the room, so that if you speak you know where to send the messages. The joining server can then synchronise the rest of the state lazily in the background, updating the historical snapshots of the state when it's done. This is MSC2775, called lazy loading over federation. It's very early days in the implementation, but we are at last making progress on it, and it's giving us roughly a 3x speed-up. Here is a video that was recorded earlier: on the left-hand side we have a lazy join of a massive room of about three or four thousand users, and it took five seconds to get into it over federation, using Complement, our homeserver test jig. On the right-hand side, a normal join was still going; it took 16 and three-quarter seconds until we got UI. Now, this hasn't yet hooked up history to be lazy-loaded, so it still takes a while for history to filter in, but it's still a 3x improvement in being able to send messages, and 30 seconds in total to join for that particular use case. We can do way better — this is literally the first cut — but it's still pretty exciting. Continuing onwards, another thing I wanted to talk about is Matrix 1.2. Historically, Matrix APIs were released separately, so you had different releases for the client-server API, the server-server API, et cetera, and we very rarely released a global update — in fact we only ever did it once, for Matrix 1.0. Now we've standardised instead on versioning each endpoint, so each HTTP API URL gets its own version number, and we release the whole spec as a monolith on a quarterly basis. There are no more /r0 prefixes in our URLs; it's now /v3. Matrix 1.1 we released in November last year, and we are releasing 1.2 on February the 2nd, which hopefully will have happened by the time you hear this. The big news in 1.2 is that we've merged in 18 Matrix Spec Change proposals, turning them from de facto experimental things into parts of the formally ratified spec. The big news is that spaces have finally landed in the official spec, even though they've been out in the world for months now; restricted joins, letting users join a room if they're a member of a given space or room; and also the matrix: URI scheme — huge thanks to Kitsune for pushing through this epic to formally define a properly registered matrix: URI scheme, given our normal IDs are not exactly valid URLs. Then there is lots and lots of work happening on improving client usability. Arguably this is our biggest existential risk for Matrix, frankly: people use Element or other Matrix clients and get confused by what's going on, and we have focused too much, frankly, on the protocol and the encryption and the decentralisation. We are desperately fixing that balance now, doing things like reworking the UI of Element to shuffle things around so that they make some kind of hierarchical sense, and going through and fixing all of the other UX weaknesses we have. Another big feature we are adding is threads — Discord- or Slack-style threads.
Threads use aggregations, or relations, to link events back to their thread root, and they fall back to replies for clients which don't know about them yet. It's a simpler alternative to the threading we had in Cerulean, although the two can coexist; you can try it right now in Element under Labs, and it should be launching real soon now. There's also lots of work around improving the encryption UI: investigating situations where you can't decrypt your messages using better debug tooling, providing better UI when things go wrong, and actually having the tooling this time to fix them. And cross-signing is due a massive UX overhaul — we are very aware that this is probably the weakest point of Matrix right now, and we are really working on it. Then, extensible events — really exciting stuff. We've always had the idea of expressing arbitrary structured information in events whilst providing a fallback for compatibility. This could be something like a temperature reading here, where you get the plain-text and HTML fallback as well as the structured data. Finally we're starting to use this: we shipped voice messages this year, and you can see there's a fallback that just says "voice message", and then you have things like the waveform to draw the pretty little voice-message widget. We've got location sharing, where the fallback gives you a textual geo: URI, but you also have the structured data to show where you are. We've got polls — really nice, because we have full fallback as well as end-to-end encryption support: people whose client doesn't know about polls will see the question as text ("Is Matthew going to run out of time for his talk?" — at this rate I am), and the response is also done as a reference with a fallback, so you can see which answer was given. Next-generation stuff: lots of interesting things happening here, and I'm going to go as quickly as I can. The matrix-rust-sdk is one of the big things, our new first-class-citizen client SDK; we've been working on it for almost two years now. The end-to-end encryption implementation is factored out as a separate crate called matrix-sdk-crypto, and it's really exciting because we can now embed it into existing clients like Element Android: we've put Kotlin bindings around it and put it in as a new encryption engine in Element Android, under the code name "Element R". This is an experimental approach, but it seems to be working — we're getting a 7x speed-up in end-to-end encryption performance — and we'll ship it as soon as it's stable. Otherwise, we're also messing around with the matrix-rust-sdk on iOS and web, and as I mentioned earlier, Fractal-Next is making good progress on it too. Then we have vodozemac. This is massive news: we have rewritten libolm, our end-to-end encryption implementation, in Rust, so this is now the new reference Olm/Megolm E2EE implementation as of today — it's pronounced "vodozemac", or something like that. It links natively into the matrix-rust-sdk; in the near future you get much better primitives, much better memory safety and thread safety, and much better performance again. I don't have stats on how much faster it is than the C implementation, but I would expect it to be five or six times faster than what we've seen historically. Currently we are building out bindings to let folks easily swap out libolm, and we've just finished an independent security audit from Least Authority, which we'll be publishing as soon as we can — an audit equivalent to the one we did on Olm and Megolm when we launched them originally.
Then we also have lots of work on DMLS. This is the decentralised version of Messaging Layer Security, the IETF proposal for group end-to-end encryption, and it has the potential to replace Olm and Megolm with a tree of ratchets instead. It's very experimental at this point, but it provides algorithmic improvements over Olm, and it could also help with our undecryptable-session problems. Normal MLS needs a centralised sequencing function, though, and DMLS is our extension that defines how you basically have each client manage its own MLS tree and then merge them together, much as Matrix works between servers. Dendrite, which I mentioned earlier, is our next-generation Go server — lots of exciting stuff here, despite distractions with peer-to-peer Matrix, low-bandwidth Matrix, sliding sync and much other stuff, so the progress comes in bursts. There's lots of progress in Dendrite 0.6, which we released last week: we've switched away from Kafka to NATS, merged together lots of the microservices (because frankly there were too many and the boilerplate was irritatingly time-consuming), refactored the roomserver a lot, landed lots and lots of performance and stability fixes, and we're up to 94% federation test coverage. We're using Dendrite both for peer-to-peer Matrix and for large-scale server deployments, keeping both in scope, and it's looking really exciting. Talking of which, peer-to-peer Matrix and Pinecone: lots of progress as well. Pinecone has sprouted SNEK, a new linear routing scheme which massively stabilises our routing performance, and we're actually at the point, at last, of modelling adversarial attacks on Pinecone. The Pinecone simulator is turning into a full visualisation engine and it's incredibly cool — go and watch Neil Alexander's talk later to see all the details. The Matrix side, however, is blocked on Dendrite, so we've gone back to focusing on Dendrite; otherwise we just need to hook up store-and-forward, multi-homed accounts and improve federation, and we should be in a position to really seriously start playing with peer-to-peer Matrix and bridging to it as well. Finally — almost — native VoIP conferencing is another huge thing that landed this year. We've now got native voice and video conferencing in Matrix: it extends the native one-to-one calling, giving decentralised, encrypted group VoIP. We basically took our one-to-one stuff, switched it to to-device messages, and defined a metaprotocol that lets you glue things together either as a full mesh, or using conferencing servers, or a decentralised cascade of conferencing servers. It will be launching very shortly, and we'll be using it to power Discord-style voice and video rooms in Element in future. It also provides the VoIP backbone for Third Room, which is the metaverse-on-Matrix project that Robert — who has also been doing the VoIP conferencing — has been working on. Finally, OpenID Connect — public service announcement: we are seriously considering moving Matrix over to using OIDC for auth. The current system is really reinventing the OIDC wheel, and we don't benefit from OIDC's security improvements as time goes on; it encourages you to type your password into random clients which you might not trust; it makes it increasingly hard to implement auth in new clients; and it doesn't allow you to delegate access to a client for only a subset of your account.
So what we're looking at doing is to have your server provide an SSO login portal, very similar to what you might be used to with Google on accounts.google.com, where no matter what bit of Google you're using, you always authenticate in the same place. Similarly, no matter what bit of Matrix you're playing with, you would always authenticate with your server. This again has been implemented as a branch on Hydrogen, so watch this space. And then, really finally: beyond chat. We are finally starting to really look at building apps which go way beyond chat and VoIP on Matrix. This could be decentralised file storage — and there's a great talk about that later today — it could be collaborative editing, it could be documents or Figma-style drawing using native Matrix CRDTs, so not just journalling your CRDT updates over Matrix but actually expressing your conflict-free replicated data type on Matrix itself. And we're also looking at open metaverse solutions on Matrix, the Third Room project, which Robert will talk about at the end of the day. So, what if Matrix stored both the data being collaborated on as well as the code? We're at a point where, in future, Matrix really could become the real-time web, and your rooms could be anything: chat or voice, forums, message boards, collaborative documents, whiteboards, Figma-style apps, metaverse games, anything. You could basically do multiplayer anything, with your Matrix client effectively becoming a browser, and switching between rooms would effectively be switching between these different real-time web experiences — and that is quite a paradigm shift. So what's next? Basically doing everything I mentioned; also account portability and decentralised reputation at last; finishing Gitter parity, particularly importing historical conversations, which is looking really exciting; huge bridging improvements on the horizon, which Half-Shot will talk about later; starred messages, pinned messages, and obviously custom emoji, because I can't believe we've got this far without standard custom emoji across the board. So: we need help. Don't use proprietary services for your chat — please don't use Discord for your open-source project. Run a server, use a Matrix provider, build bots and bridges and clients, don't reinvent the wheel if you're building a new thing, follow us on Twitter or on Mastodon, and spread the word. Thank you very much. The first question is: one of Matrix's, sorry, issues is that servers can theoretically collect and analyse metadata — what role will metadata play in the near Matrix future? We don't have much time, so I'll jump in — yeah, sorry, go ahead. So, the metadata question: first of all, you can obviously limit your metadata footprint by running your own server, and in rooms which don't have other people's servers in them, nobody else is ever going to see your metadata. A really common misconception is that Matrix somehow gives global visibility of metadata — it's just not true; only the servers which are participating in a given conversation will be able to see it. That said, we obviously want to minimise that footprint too, and one of the things we've got on the horizon is encrypted state events: there is now an MSC for encrypting things like the name of a room, the topic and other stuff which historically hasn't been encrypted. And then, for the actual metadata, peer-to-peer just forces us to solve this problem properly, and there's a whole lot of interesting work there.
Koji points out that peer-to-peer is still sci-fi, and it might be a little way off, but frankly metadata-resistant communication is pretty sci-fi too, so we need to solve both basically at the same time. There's definitely a model where the servers end up being more of a store-and-forward system — similar to Signal, who really don't have visibility, with things like sealed sender, into who is talking to whom — and all of the metadata itself stacks up client-side, peer-to-peer. Second question: when can we expect to see sliding sync in the standard Element clients? So, you can play with it in Hydrogen right now, today. If you want it in Element proper, we're basically going to make it work completely in Hydrogen first, I think, before we go and do the very disruptive changes needed in Element to make it work. That said, I know that Adam on the Element Android team has been experimenting with it, so we might see it land first in Element Android. Very exciting. You mentioned the security audit — has it been published, and if not, when should we expect the publication? So, we got the initial report on it last week, and it goes through all of the things they found — some of them are legit, some of them less legit — and we basically need to talk it through with them and then get a final version, which will probably happen in the next week or two. Very exciting. So, what about Matrix or Element being able to advertise to a non-technical audience through ads, posters, the internet, etc.? Yeah, so this is kind of tangled up with how the funding of the project is looking, and obviously doing mainstream advertising — of the kind some messaging apps have been doing recently, with literal bus-shelter adverts in London — costs an absolute bomb. In order to do that well, we don't want to just raise money from investors and then probably spend it on bus shelters; we'd rather make Matrix itself better and more usable for a mainstream audience, and the way we do that on the core team, effectively, is to try to make Element the best flagship app that we can. Obviously everyone else is very welcome to do a better job and build their own apps, but the one we're trying to guarantee is out there, even if everything else goes sideways, is Element. Then, as Element gets more usable by normal people, hopefully we can sell enough Element, fund enough hosting via EMS, and sell it to enough governments etc. that we can take that cash and spend it on bus-shelter adverts — and then you get a virtuous circle of Matrix taking over the world. The minor thing that is missing there is getting it simple and glossy enough that people can migrate over from WhatsApp easily. Perfect, thank you. So, back to the audit quickly, because we've got a few seconds left: what was audited exactly? It was literally the implementation of vodozemac, just like the audit we did for libolm back in the day, and it was a pretty good result, honestly. Thank you. Thanks, Brendan.
The Matrix core team is busier than ever, juggling hundreds of Matrix Spec Core Proposals and undergoing some major tectonic shifts as Matrix evolves into the ultimate secure decentralised communication network. In this talk, we'll give a high-level survey of the state of the core project, including: * How we're ensuring that flagship clients are as attractive as possible to a mainstream audience - and why we will fail if we don't. * How we're making Matrix go fast via v3 sync and fast room joins * How matrix-rust-sdk is becoming a flagship client SDK * How we're getting to full end-to-end security of the reference Matrix stack * How we're tackling abuse on the public Matrix network * How Matrix is evolving to use cases beyond chat.
10.5446/56898 (DOI)
Hi, my name is Hesham and this talk is called Adventures in Dataflow. We're going to be talking about Dataflow programming in the context of end-user programming. That is, programming done by people who we don't normally consider to be programmers, but who are programmers nonetheless as we will see. So let me start by quickly introducing myself. I'm a free software developer. I have been for a long time. I have created several projects. So depending on the community, you may know me for different things. I've been very active for many years in the Lua community, and I'm the creator and maintainer of LuaRocs, the Lua package manager. I have also created Teal, which is a statically typed dialect of Lua. But many of you also know me as the creator of Htop, and yes, people have asked me many times yes, the age in Htop stands for Hesham. Well and since now everything in life is about coding. Well here are my cats, Ada and Pascal. Yes, programming references again. And this is the music that I made. If you're interested in in LuaRoc, please check it out. But for the topic of this presentation, I think what's most relevant to mention is that I have a PhD in computer science from Pukarillo. And the topic of the thesis is Dataflow semantics for end-user programmable applications. The thesis itself deals with the theory, but the question that I have most interest in practice is how can we democratize computing? I understand that this is a very open-ended question. So I want to narrow down the focus here and specify that I'm not talking about access, about access to actual computing devices, because this has been spreading. We all know smartphones are computers and all that. But really talk about the power, about what can we do with those things, about being active producers versus being passive consumers. So to bring the point across, again, it's not a question of access, but a question of power. And when we talk about our relationship with computing, those of us who are of a certain age, we do remember that users used to have that power of being active producers rather than passive consumers. So this is one of the things that fuels that deep nostalgia for the 8-bit era of computers like the Apple II, Commodore 64, BBC Micro and all of that, because that entailed a sense of ownership and power towards the machine that users have since lost. And most importantly, this has produced a divide between end-users and programmers. And it's very unfortunate because we as programmers still have that ownership and power. We feel that we own the machine, especially in free software. And we also feel that if we need to change something, we can. And if we want to automate something, we can do a quick shell script and all of that. So it's a very different relationship that we have towards our machines. And even if we look at our relatives and people close to us and people who use computers merely and use their phones, and they have a completely different relationship with their machines. So we mostly took away that power from users as we made interfaces simpler and as we as an industry locked down things and all of that. But the key word here is mostly. We haven't removed all the power from users because if you look closely, you will realize that end-user programming is a reality. There is a number of programs in which users actually have computational power at their disposal and they can actually do things that were not originally planned by the application. 
And they have the feeling that they can come up with something new as they use the applications. And this is seen in all sorts of fields, not only with more obvious things like engineering, but also in the arts and music and all of that. If you look closely, there are many examples of applications that are themselves programmable and their users are really programming even if they don't realize it. But that is not the default. That's not the way that using computing devices is understood nowadays. Unlike in the 8-bit era, in which you would turn on a computer and the first thing you would see would be BASIC, and users would be simply driven towards it. I've seen that change happen, and I've seen people who were able to actually do simple programming tasks in an 8-bit computer, and as the years went by they became passive users of modern machines. They simply lost that ability and now they have a very different relationship with their computing devices. So how can we bring that back in a way that makes sense to computing nowadays? Well, in my PhD studies, I have looked for success stories. I searched around for programmable environments that users love, where when I talked to their users I would see that they would essentially live inside those applications, in the same way that, for example, Emacs users essentially live inside their applications. So we've seen a number of examples of programs like that and we've seen the screenshots, and here are some of their names. But the question is, if you look under the hood, what do they have in common? They are all based on dataflow. So what is dataflow? It's essentially programming via a graph of data dependencies. In computer science terms, that's essentially understanding a program as a series of function applications, and the specification of a program as the connections between those functions, their inputs and outputs. So you might look at that and say, well, that's functional programming, and well, essentially it is. Well, but don't get too excited yet, because when we're talking about end-user programming, one important distinction is that we usually don't talk about higher-order functions, which are the staple of functional programming. So that's essentially first-order functional programming. So to get around that whole discussion, let's just call it declarative programming. Okay, most important here is to make that distinction between declarative programming, which is something that users really pick up very intuitively, and imperative programming, which actually takes some teaching. One thing that's important to point out is that dataflow is really about the conceptual graph, and not specifically that you have to have a visual language. For example, the spreadsheet cells that you see here correspond to the dataflow graph that you see below. And as you can see, even the textual language of spreadsheet formulas expands to a dataflow graph. And this example also helps to dispel some myths about end-user programming. For example, that dataflow environments are always visual programming. They're not. Some of them are, for example, nodes in Blender, but a spreadsheet is also a dataflow graph. And another myth is that users are afraid of text interfaces, but the formula language of Excel and other spreadsheets is entirely textual, and it has over 500 million users worldwide. So you could say that Excel is the most used programming language in the world by far.
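To make the picture concrete, here is a minimal sketch in Lua (the language the UserLand prototype happens to be written in) of a spreadsheet treated as a dataflow graph. This is purely illustrative and not UserLand's actual code: the cell names, the dependency bookkeeping and the eager re-evaluation strategy are all assumptions made for the example.

```lua
-- A minimal sketch of a spreadsheet as a dataflow graph (illustrative
-- only, not UserLand's actual code): each cell holds either a constant
-- or a formula over other cells, and changing an input re-evaluates
-- everything downstream of it.

local unpack = table.unpack or unpack   -- Lua 5.1 / 5.2+ compatibility

local cells = {}       -- name -> { value, formula, deps }
local dependents = {}  -- name -> list of cell names that read it

local function eval(name)
  local c = cells[name]
  if c.formula then
    local args = {}
    for i, dep in ipairs(c.deps) do args[i] = cells[dep].value end
    c.value = c.formula(unpack(args))
  end
  for _, d in ipairs(dependents[name] or {}) do eval(d) end
end

local function set_const(name, v)
  cells[name] = { value = v, deps = {} }
  eval(name)
end

local function set_formula(name, deps, f)
  cells[name] = { formula = f, deps = deps }
  for _, dep in ipairs(deps) do
    dependents[dep] = dependents[dep] or {}
    table.insert(dependents[dep], name)
  end
  eval(name)
end

-- A1 = 2, B1 = 3, C1 = A1 + B1   (like typing "=A1+B1" in a spreadsheet)
set_const("A1", 2)
set_const("B1", 3)
set_formula("C1", {"A1", "B1"}, function(a, b) return a + b end)
print(cells.C1.value)   --> 5

set_const("A1", 10)     -- editing an input propagates through the graph
print(cells.C1.value)   --> 13
```

The user-facing part is only the declarative bit, the formula on C1; the graph maintenance and the recomputation order are the engine's job, which is exactly the division of labour these end-user dataflow applications rely on.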
And it's pretty wild to think that actually the programming language with the most users is a declarative one. So how and why do these dataflow systems work? This is a two-sided question and the answers are related. If we look at it from the side of the how, how do these dataflow systems work, we really talk about, as computer scientists, as programmers, wanting to understand what's going on under the hood, what's the computer science around them. And if we look at why do these dataflow systems work, what I mean is why do they end up working well for users? Why is this the programming model of choice of successful end-user programmable applications? As I set out to do my research to try to answer those questions, I came across this book by Bonnie Nardi called A Small Matter of Programming, which was published in the 90s, in which she did a series of user studies to try to understand end-user programming and how people relate to programmable environments. The results were very interesting. She identified three types of people who end up being related to the task of programming. The first type would be the end users. In an end-user programmable environment, they will program, but they will be mostly focused on the task at hand. They just want to get the job done. For example, if they're a geologist or a musician, they really care about the geology or the music, not about the program. The program is really a means to an end. That doesn't mean that they are not advanced users. They're really experts in their fields. It just means that they don't really care about the program itself. They don't want to spend the time thinking about programming. They want to spend time thinking about their domain, about the project that they have at hand. I speculate that this is also why those end-user programming environments don't really have higher-order functions. It's not as much that the users cannot handle them. It's really that they want to be focused on the objects of their domain, not on programs themselves. So they don't really want to write functions that deal with other functions. They want to specify behaviors related to objects of their domain of expertise. The second type would be some sort of enthusiasts, people who end up enjoying programming itself. But they come from the background of the task at hand. Those would be the geologists and accountants or musicians or whatever who would end up taking some joy from the actual programming side of the thing. And they would naturally dive deeper into the programming aspect of things. They will take advantage of more advanced functionality that the programmable environment offers, such as scripting. And finally, you will have professional programmers. Those will typically be the core application developers. Amazingly, as I started to research end-user programmable applications, I started to notice that all of them shared that three-layer design that matched those three personas. You will essentially have a very high-level language for end-user programmers, a scripting layer in the middle, and then a core application that's developed by the professional programmers. In my research, I employed a classification of dataflow languages that can be applied to those end-user programmable applications, so we can compare the languages that are exposed by their interfaces. And I also extended it with a few more categorizations of my own. And then I performed a series of case studies comparing all those applications across those various dimensions.
It was remarkable how many of those choices were consistent across those successful applications. And the other options for design choices that were used, for example, in academic prototypes and things like that, really did not catch on. So clearly, there are some patterns that are more conducive to successful dataflow design for end-user applications. As examples of common designs: well, all of them offer some kind of iteration. All of them use unidirectional dataflow. None of them have higher-order functions, and almost all of them use a static graph for dataflow. And all of them have a scripting layer. And amusingly enough, all of the scripting languages are imperative. So once we understand how the semantics of a dataflow language that's powering an end-user programmable application should behave, and I'm not going to get into the details of that semantics here, it's all in my thesis if you're curious, we can start asking ourselves a question. What would it look like? What would it feel like to have a default user interface that would be based on those principles, so that we could have a dataflow paradigm underlying the applications, and we could combine them and have data flowing around and come up with creative things? So I would really like to see the answer to that question. But before trying to solve any kind of problem of finding the solution for programming for the masses, I realized that actually I would like to see that solved for myself. I realized that even us, even we programmers, we need something better than what we currently use as the interface for our programs. Yes, we do have shells that have pipes, which are the most basic form of dataflow, but essentially we're still using clunky tools, we're held back by legacy. So really, is this screen full of terminal emulators, emulating devices from decades ago, really the best that we can do nowadays? I've always been the kind of person who gets annoyed by this sort of arbitrary limitation. And even like 15 years or so ago, when I started htop, that was out of my frustration with top, which back in the day did not allow you to scroll through the list of processes, did not give you a visual representation of your memory or anything like that. So htop was back then the best that I could do, and even that was limited by the paradigm of the terminal, trying to draw trees of processes with ASCII art and whatnot. So I told myself, well, what do I have to lose? Let's try to rethink the interface from scratch. And that's when I came up with UserLand. So UserLand is an integrated dataflow environment, which is inspired by the common core of dataflow apps. You can think of it as a shell for multiple types of applications. Each application is essentially implemented as a plugin towards the main app, and everything should integrate seamlessly. The current status of the project is that I have a prototype written in Lua that uses the Love2D graphics engine, which is actually a game engine, but worked really well for showcasing the ideas behind the interface. I'm currently writing the core application in C using SDL for graphics, and of course the idea is to eventually add a scripting layer and add support for plugins and additional applications written on top that can be written in any language. So let me show you a demo I prepared a while ago using the prototype. I think it's a good representation of the ideas put in practice. UserLand is a programmable end-user environment based on dataflow.
At first glance, it works just like the quintessential end user dataflow application, a spreadsheet. In fact, cells, formulas, references work just like one would expect. The difference from a typical spreadsheet though is that cells are created on demand. But otherwise their behavior is pretty normal. But UserLand is not a spreadsheet, or at least not just a spreadsheet. The spreadsheet functionality was loaded as a module and in fact was activated when we first entered a cell with an equal sign. We can switch out of the spreadsheet mode by pressing a key combination and then enter a different mode, shell. The shell mode implements an actual Unix shell. The current implementation forks commands off to bash so you have the full power of bash scripting available. However, just because we are using a Unix shell, it doesn't mean we have to be constrained by the limitations of a Unix terminal. As we see here, launching a long-lived command does not prevent us from continuing to interact with the system. We can move up, edit and relaunch commands while other cells are still running. The user can choose between adding more commands at the bottom, like a regular terminal would do, or just replace the command with something else. Notice how this implementation of cat displays the file with syntax highlighting. This is because this is not the system's cat, but rather a module implemented in UserLand. The idea is to gradually extend the capabilities of the shell with functionality that can make use of the enhanced environment. As we can see, running internal commands implemented via scripting is indistinguishable from running external commands from the system. Here we replaced cat with tac, which is a binary from the system that prints a file backwards. Let's switch back to cat now and let's try something different. Let's build a pipeline. We'll pipe together cat, tac and wc to count lines and voila! Each bit of the pipeline is now its own cell. It can be manipulated independently and it can also be edited and triggered independently. So let's modify the wc invocation and change it from counting lines to counting characters. Execution and evaluation are triggered according to data flow rules. Let's edit the command in the middle of the pipeline, changing it from tac to grep. Grep filtered the input, looking for occurrences of the word type and the count of characters was also updated. We can freely mix interaction styles. Here I move to a cell with the mouse and then I press Ctrl-L to clear the screen in typical shell fashion. For the next bit of the demo, let's switch to the home directory. And as I'm sure it would be expected from a shell environment that frees itself from the limitations of character graphics, UserLand does support graphics. Here we are seeing the show command which was implemented as a built-in in UserLand's shell module. The built-in was implemented in such a way as to integrate with the data flow mechanics of the system. Let's build a pipeline that begins with our built-in cat, which feeds data into the external program convert from the ImageMagick suite, and then finally goes into our built-in show. Here's the result of our pipeline. Our show command at the end displays the converted image. The cell in the middle running convert has fed the proper data into show, but since UserLand doesn't know anything about convert, it just showed the data as text. We can use the built-in keyword quiet to get rid of that.
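As a rough sketch of what the pipeline part of the demo describes, with each stage of cat | tac | wc being its own cell and an edit re-triggering only the stages downstream of it, here is a small Lua illustration. It is not the UserLand implementation: the stage functions are in-memory stand-ins for the real external commands, and the dataflow graph is simplified to a linear chain.

```lua
-- Illustrative sketch: pipeline stages as independent cells.  Each stage
-- is a function from input text to output text; re-running one stage only
-- re-runs the stages after it.

local stages = {}   -- ordered list of { name, fn, output }

local function run_from(i, input)
  for j = i, #stages do
    stages[j].output = stages[j].fn(input)
    input = stages[j].output
  end
end

local function add_stage(name, fn)
  stages[#stages + 1] = { name = name, fn = fn }
end

local function edit_stage(i, fn)
  stages[i].fn = fn
  local input = (i > 1) and stages[i - 1].output or ""
  run_from(i, input)          -- downstream cells update, upstream ones don't
end

-- stand-ins for cat, tac and "wc -l", working on an in-memory string
add_stage("cat", function(_) return "alpha\nbeta\ngamma\n" end)
add_stage("tac", function(s)
  local lines = {}
  for line in s:gmatch("[^\n]+") do table.insert(lines, 1, line) end
  return table.concat(lines, "\n") .. "\n"
end)
add_stage("wc -l", function(s)
  local n = select(2, s:gsub("\n", "\n"))   -- count newlines
  return tostring(n) .. "\n"
end)

run_from(1, "")
print(stages[3].output)       --> 3

-- editing the middle cell (tac -> a grep-like filter) re-runs stages 2 and 3
edit_stage(2, function(s)
  local out = {}
  for line in s:gmatch("[^\n]+") do
    if line:find("beta") then table.insert(out, line) end
  end
  return table.concat(out, "\n") .. "\n"
end)
print(stages[3].output)       --> 1
```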
Just like we did with our text pipeline, we can play with the arguments and see instant results. For the last bit of the demo, let's integrate all that we have seen so far. By pressing a key, I can reset any cell and switch it back to spreadsheet mode. So let's create a couple of cells and then add a formula. Just like the data flow relationships in spreadsheet cells are declared via their textual language, the same thing is possible in shell cells as well. Here we're going to use the spreadsheet cell identifier, adjusted to the shell syntax, of course, to control the argument for this shell invocation. UserLand has a single unified data flow engine, so the spreadsheet cells and the shell cells can integrate seamlessly. The architecture is extensible so that new modes can also be created. To wrap up the demo, let's switch back to shell mode and demonstrate the communication in the opposite direction. So let's begin by displaying a text file and then extending that into a pipeline that counts the number of lines. That conveniently produces a numeric value which we can then use in our spreadsheet formula, which will then propagate to the pipeline above. And that is all for today's demo. This is UserLand, an integrated data flow environment that allows different applications to be constructed and that will hopefully allow users to combine them in creative ways. Well, I hope you enjoyed that and thank you for watching. If everything is going well with the conference streaming, I should be available now live for the Q&A session. See you there. Hey, am I live? Yeah, I guess we are, so I'm not seeing a lot of questions in the chat right now. Let me see if I can see something in the main chat. So yes, okay, I'm looking at the main chat right now. So there's one here saying: I was just about to ask if you could use spreadsheet cells in other modes, and yep, yes, this is possible. One thing I have done in the extension of the prototype was adding a third mode already, a synthesizer. So that was a fun thing too. So you can do calculations and feed them into changing the waves and playing with all of that. The main idea is that you should really be able to do that for any additional mode that you add. I'm calling them plugins right now because I've asked non-professional programmers what kind of terminology they would more naturally understand. I basically asked people how they understand the word plugin, and they basically described what a plugin is, and then I figured out, okay, non-programmers know what plugins are. I was originally calling them modules, and module was completely programmer-speak to them, with no meaning. So I'm calling them plugins from now on. So yeah, the idea is that when you add a new plugin, you can get data in and out, from any plugin to any other plugin. Let's see here, let's see something else. Okay, a question. As someone who is a long-time user of Max and PD, the biggest issue is that you end up writing as much C as you do PD. In some way they become interface layers over C, and the SDK is quite convoluted, unlike Scheme FFIs which are usually very easy. In fact, once you use those environments professionally you start extending their capacities in C in order to interface with objects written in C. Do you plan to address this extensibility in a thorough manner, and how would you go about making this process seamless if so?
Yeah, so that was one of the findings of the thesis, about a successful environment like that. And I make this distinction of successful environments because in academia there were studies of visual environments and dataflow environments, and many of them used very different designs, but when I looked at the ones that were industrially successful, like Blender, LabVIEW, Excel, those kinds of things, they tended to have very similar designs. So I figured that, well, if all of those successful ones have similar designs, they must be doing something right. So one thing, to address the question: the most successful ones generally always have a middle layer, which is a scripting layer, which is an easy-to-learn programming environment for more heavy-duty programming than the one that you get with the basic interface, but not as heavy as going all the way down to C and those kinds of things that require tool chains and all of that. So I think the answer to that question is that having a middle layer with scripting is just as important as having an upper layer with dataflow-based end-user programming. So let's see another question: when will the rewrite be available? I'm thinking about getting feature parity with the demo first, so this is something that I have in mind, this is my goal. Yeah, hopefully sometime this year. Another question: do you think Jupyter could be used as an end-user programming system, or does that require too much knowledge? That's an interesting one, because Jupyter is essentially a dataflow environment if you look at the cells like that, but then the contents of every cell is generally always scripting. So yes, of sorts, if you think of Jupyter as another way of looking at a spreadsheet, but generally the kind of knowledge that it requires, you almost always already have to know the scripting level to get the most out of it. But yes, I think definitely something like Jupyter is in the right direction. Okay, so Christine says: obviously this approach is general, but what kinds of projects would you like to see these ideas applied to most directly? So that's why I started with the shell plugin. At first I would like to apply it to my own use, so the kinds of things that I would like to use it for myself would be as a replacement for my terminals, as a replacement for my text editor and those kinds of things, and as a replacement for a file manager. So generally, managing your local offline environment, and that's my very selfish initial goal, but I could see something like creative programming, those sorts of things that you would do with Processing, and more multimedia stuff. I could see that integrating more easily as well, and even creating visualizations for the other possible plugins and those kinds of fun stuff. So I would probably start from this avenue and then branch out into other kinds of end-user applications. So, is UserLand a standalone app or something that runs in the CLI? Right now the prototype is a Love2D Lua program and the rewrite is a C program using SDL, so it's a program that you launch, and then you can go full-screen or you can run it from a window. How do you deal with circular dependencies?
Yeah, in the dataflow graph, it's interesting: in the prototype it crashes, and in the rewrite it actually feeds back and loops nicely. So right now, if you do a circular sum in the spreadsheet itself, it keeps increasing; if you have, say, a one and then a two, and then you change them to a two-plus-one and a one-plus-one, you see the number keeps increasing, so it's like a live environment. And there's another comment here. Now that I think about it, Emacs isn't a dataflow environment, but it does tend to be a gateway to end-user programming despite this. Would you agree, and if so, why do you think that might be? Yeah, I think Emacs is definitely a gateway from a text editor into the scripting layer, very similar to what I mentioned in a comment above, when Christine mentioned that Blender tends to be a gateway towards scripting. Generally I would think that, well, we all know that Emacs users tend to be Linux users, Unix users, Free Software users; many of them are already programmers, so maybe those are programmers who are not used to the idea of end-user programming and programming their own applications. But I don't usually see lots of people from other walks of life using Emacs casually and then getting into programming through that, but you will see artists using Pure Data and then learning programming, or Blender, through that. So yes, I would agree that Emacs is a gateway towards end-user scripting, but probably not as much end-user programming at that highest level. So yeah, another comment here: Max and Pure Data are used similarly to the way Emacs users use Emacs. Yes, that's very much the point, and I've talked to users of all of these other applications, and indeed, if you look at the way a power user of Excel uses Excel, yeah, they really stretch the limits of what's possible in there. So yes, it's very similar, the way they live inside the app, and they love the app, and the way they use it for things it wasn't originally designed for and things like that, it's very impressive. So another question said: so the plugin could be understood as an interpreter component for the data you would like to use, is that right? Yes, that's right, and right now you can kind of sort of do it with the shell mode, since you can basically call an interpreter within the shell plugin, but yes, I would like to see plugins that directly plug in interpreters for scripting languages, so that, you know, you could have a snippet of Python or a snippet of Scheme or whatever our favorite language is in there, and have data flow in and out, and you can connect them via the interface seamlessly.
Another one: as the developer, do you have a certain bias from not yet having the scripting layer, and how much do you think APIs can get around those layers? Well, in my experience studying those successful apps, they all tend to have scripting layers, and one thing I've noticed is that having the scripting layer makes you avoid the temptation of making your highest-level end-user language too complicated, because then you can just, you know, draw a line in the sand and say, okay, if you want something more complicated than this, then we can move that to the scripting layer, so that you can keep the end-user level welcoming to newcomers and things like that, so that it does not require lots of specific programming language knowledge. Because if you look at dataflow... it's... I guess we're running out of time, so yeah, we can continue.
"How can we democratize computing?" — that is a sentiment echoed by many people, especially those of us in tech who realize what the general audience is missing out from computing devices, when they use them as passive consumers. If we think of computing devices as programmable tools for active exploration, how should they look like? What programming model should they be based on? It's fair to assume that it shouldn't look like what passes for regular programming nowadays, since that is clearly out of touch with users. Perhaps something more... declarative? ...minimalistic? So, how can we bring declarative and minimalistic computing to the masses? And... maybe we have already?
10.5446/56900 (DOI)
Hello everyone, and welcome to the Declarative and Minimalistic Computing dev room at FOSDEM 2022. Like last year, FOSDEM 2022 is an online event. The last 3 years have been really hard for everyone, and as we continue to learn to live with Covid, we are hoping we will be able to meet you all again soon. This talk was prepared by Oliver, Piotr and yours truly. Professor John McCarthy once said: program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it. If designers and programmers think about the apparent mental qualities that their programs will have, they will create programs that are easier and pleasanter, more humane, to deal with. With his work, Professor John McCarthy pioneered artificial intelligence as we know it today and gave the world the gift of the Lisp programming language and all the Lisp dialects that were inspired by it. In a sense, he kickstarted the modern computing era. But his words were forgotten. Today's programs try to control how the user interacts with them, and the tools we, as programmers, use to develop are becoming more and more strict on what we can do and how we can do it. Let's discuss a very common scenario. We will assume you want to develop an Android app. The official way of developing an app is through Android Studio. You are writing your app, you compile it, you run it, hoping it will do everything in the expected way. And of course, it will fail. It will fail because something happened that was not supposed to happen. You will then have to debug the program into correctness by following the state of every variable to figure out what is causing the issue you are having. Is this what we really want? But what if I could tell you there's another way? You can write code that will state what the desired outcome is. It will get an input, it will return an output, and it will not touch the state of anything else. And all that while being able to output these changes live, allowing the developer to directly see what they are working on. This allows for much faster prototyping in early stages, while still being able to reason about exactly what the code does. And of course, later, deploy to production without having to rewrite everything, as is the case with another language named after a snake. This is why Lisp is the second oldest language still in use, because of the power it gives to the programmer to do what needs to be done, without worrying about how it will happen. Lisp frees the programmer to think of solutions, not problems. In companies, it is said that one should worry about the competitor only when they hear they are using Lisp. What is Guix Europe? Well, Guix Europe is a legal, not-for-profit association with the aim of providing a legal home for the Guix project. Guix Europe, just as stated, is a legal not-for-profit based in France, with most members consisting of core contributors, or at least persons who are strongly interested and passionate about the potential that Guix has as a software platform to resolve problems or issues in the computer and software domain. So please consider joining the association to protect and help advance the state of Guix, and declarative and minimalistic computing languages for that matter as well. Now, any questions?
We hope that you will enjoy all the other really great talks from our speakers, and please make sure to watch the talk from William Byrd called the Rational Exploration of Markarthnes. Now we will be available for any questions. Thank you for watching this talk. Okay. Thank you everyone for coming. I hope you will enjoy our lineup of the really awesome talks that we have. Ideally, I wanted Oliver to be online right now, so he could continue with his talk. But unfortunately, for technical reasons, he's not here. Oliver, if you can hear this, please join. Yeah. So is there anything you... Okay, so I'm a bit... Okay, so a question from Piotr: what was the hardest thing about organizing this day? So, first of all, you have to make sure that you define the purpose of this dev room. It is important to make clear what can be included and what the purpose of this dev room is. In our case, we want to support any language that promotes what we discuss in this talk. So... Yeah. And then, when we have decided what we want to consider, we need to make sure that we attract the best speakers we can get. And as you will see in the rest of the day, we did. We have really good speakers. And yeah, by the way, it would be really nice if you could upvote the questions so I could see them on the right, because the way the UI works, I have to go back and forth. Yeah. Anything else? By the way, it's also a good opportunity to mention Guix Europe, because Oliver would want me to do that. Which is... Give me a minute. Christine, thank you. It's really nice to hear that. We are trying every year to improve and grow as a room, and to attract even better talks. And also, something really important for our dev room is to be as open as possible to everybody. Openness is really important for us. And shortly we will continue with the next talk from JJ. So about McCarthy, I think that the video does a much better job of giving a really short summary of what he did. McCarthy is one of the people whose work we, as a human species, were lucky to have. We wouldn't have all these languages if he hadn't started all of this. Yes, please make sure to watch William's talk later today. William is one of the best speakers we have. All of our speakers are great, but William is really good. Well, Python, I try to avoid saying this name here, also to avoid anybody thinking that I may have taken a dig at Python. I cannot confirm anything. From experience, from all the projects that I have worked on, every time we started with Python, we ended up rewriting everything in something else. In our case, it was Clozure. Yeah, so later I will share a really nice paper in this room about writing in a language that you can prototype in and then use in production pretty much directly, right? And Piotr, thank you for making me a bit less nervous. Right now I still... Okay, it should be less than a minute now. So if you want to ask anything else... It's like prototype to production; it's really important, because at the beginning you need to be able to prototype fast.
And then, like I have seen many times in many companies, things that were supposed to be prototypes could be in production for much longer... They might get to production without any rewrites and stay there and be really slow. And you're like, okay, what happened? Why is it so slow? And this is how we end up with really shoddy services. Then somebody has to rewrite everything from scratch. That's what we solve with these languages. Okay, people, thank you.
Welcome to the Declarative and Minimalistic Computing Devroom. In this year's virtual conference we will honour the late Professor John McCarthy as the founder of AI and the inventor of LISP. McCarthy with his work pioneered artificial intelligence, developed the Lisp programming language family and kickstarted our modern computing world. Lisp is one of the two oldest computer languages in use today.
10.5446/56903 (DOI)
Hello, my name is Troels Henriksen and I am here to talk about designing a programming language for the desert. I am a researcher at the University of Copenhagen, where I work together with my colleagues on a programming language called Futhark and its compiler. So Futhark is a language in the ML family like OCaml or Haskell, and you program it much like you program those languages, using combinators and arrays, and then we have a compiler that can turn the resulting programs into very highly optimized code running on GPUs or multiple CPUs, and that compiler is what our research really is, what we publish papers about. But it's not really what I'm going to talk about today, because the Futhark language is not meant for full applications; it's only meant for the small performance-critical parts. It's a pure language, so you cannot interact with the outside world or a user in any way; you write functions that are then called by a program written in some other language. And that creates some interesting constraints on the language design, and I'm going to talk about some of those principles and some of the constraints today. Although my focus will not really be the language itself but more the tooling around it. So building a programming language of any kind takes a lot of hubris, because the average number of users across all programming languages is about zero, and language designers know this. They know that when you create a new language, odds are that language is not going to succeed, or odds are no one is going to use it. But clearly there is some ambition going on there. No one wants to create a language that no one uses. So most languages are designed with the hope and the plan of succeeding eventually, because they tend to have very large domains by being completely general purpose. You can use them for anything, and they are built with the idea that they must scale to large teams and large programs, which means you might have or need complicated build tools and debuggers and package managers, and maybe one of those sufficiently smart compilers, so you add complicated language features that you have no idea how to compile efficiently, but maybe someday in the future, when the language is a huge success, enough people will be working on the compiler to make it run fast. And also you assume that most of the users of the language will have it as their main language, so they will have the time and the motivation to learn all of the subtle details about how the language works and how to use it well. So in a sense these are languages that are meant and designed for a resource rich environment. And I don't mean the machine resources; maybe this can still be a language that is meant to run on a very small computer. By resource rich environment I mean that the plan is for the language to eventually have a large number of users who can put in a lot of time, and a large number of maintainers who can also put in a lot of time, so you can create advanced tools and expect your users to spend time to learn about the language. And companies think like this when they push a new language, they assume it will be a big success, but hobbyists also do so. And there is nothing wrong with that.
So this table shows a handful of languages that are kind of general purpose, and eventually these languages would all want to take over the world, or at least remain relevant and be useful for everything, and they are complicated, and while they might not be there yet, they are designed with the idea that eventually they can have complicated tools, they can have advanced compilers and many users and rich ecosystems. And some of them have now become popular enough that such sufficient tooling may actually exist. Some because there was a big company that just put in enough resources to make it happen no matter what, like with Swift. Others, like Rust, just became popular organically and are starting to grow pretty advanced tools because they are so popular. There are lots of people who want to write tools for them. So these languages really want to grow up to be this. They want to be tigers, not because they want to eat other languages (or maybe they do, but that's not my point), but because they are supposed to thrive in a resource rich environment like the jungle, where, sure, you can have a big muscular body, you can afford to spend lots of resources, because there are lots of resources available. You can always get some more. And sure, many of the languages will not grow up to be tigers. They will die. They will have no users at all, but that's not the goal. So what about Futhark? What about our little data parallel language meant for high performance computing? Well, that's a small domain, and because Futhark is not general purpose, you can't write full applications in it. Most programmers who use Futhark mostly use some other language and then just want to speed up some part of the program by rewriting that in Futhark and integrating it into a larger program. That also means that Futhark will always be a guest in a larger code base that is mostly not written in Futhark. And this is not a resource rich environment, because even if Futhark wins (and sure, I have just as much hubris as other language designers, so of course I hope and expect that Futhark will win and take over the world, or take over its domain), that's really a tiny domain, and even when we win, we still won't have that many users. Our users won't have that much patience or time to spend on the language, and we won't have that many development resources. And that creates some interesting constraints for how we've designed the language. So this is Futhark. Futhark doesn't live in a resource rich environment. It is in a desert. It's not a tiger, it's a hedgehog. It must conserve its resources and spend energy only when absolutely necessary. It must really make sure to maximize the rewards for its investments. And the approach we've taken is a kind of conceptual minimalism. So when we design the language and its tooling and its ecosystem, we try to minimize the number of things that require ongoing maintenance, such as servers that host packages or documentation or whatnot. We try to minimize the amount of implicit behavior in the language itself and its tools, because our users will probably not memorize or study the tools or language enough to recognize implicit behavior. So everything should be as explicit as possible. We try to minimize the degrees of freedom to limit the amount of choices that our users will have to make. And we try to minimize novelty except where absolutely necessary. So of course we don't want to just create a language that is a clone of JavaScript. So there will be some novelty.
But we try to make sure that we only do something unusual when it's really crucial to our value proposition, which is high performance execution. And really the core is we have to do just a few things so we can do them well. And we end up saying no to things that are good ideas in most languages. So what I'm arguing here is not just that when you're designing a language for the desert you should do a really good job. It also means you need to make some harsh trade-offs, say no to things that in a more resource rich environment are good things, but you just can't afford them when you're a desert language. So let's look at some concrete examples of what that actually looks like. So first let's talk about build systems and multi-file programs. So Futhark is meant for small programs, but we still want to be able to support splitting a program into multiple files. If nothing else, then to support third party libraries, as we'll see also. Now I don't think any programmer enjoys learning about build systems or how imports in a programming language resolve to files. And to make sure that we don't ask people to learn too much new stuff in Futhark, our principle here is that the easiest thing to learn is something that you already know. So in Futhark we create a very strong relationship between these import statements and the file system. So when in Futhark you do an import of foo/bar, then that imports exactly the file foo/bar.fut relative to the importing file. And that means that the semantics of importing are just exactly file system semantics, with no hidden details and no other rules. And all uses of code in other files, except for a very few built-in things, must be through an explicit import. So there's no implicitly available environment or anything, everything has to be explicitly imported. That's boilerplate, but it means that there are few rules to memorize. It's obvious by looking at a file what other files it's using. One downside of this very simple design is that files have no canonical name. For example, let's consider this directory tree where we have a main.fut and then we have two folders that contain other .fut files. So if we want to import the bar.fut file in the foo directory, then if we want to import that file from main.fut we would say import foo/bar. If we want to import it from baz.fut we would just say import bar, because it's in the same directory. If we want to import it from baz.fut in the kubes directory, then we would have to use two dots to move up a directory, then enter foo, and then import bar. So that means that the same file bar.fut has three different names in these three different imports. I mean, it's not surprising really, because it's really just file system semantics. Everyone who knows about the hierarchical file system will understand this, but it is a problem that files don't have a canonical name. So this is a trade-off that we made in order to make everything as explicit as possible, but it might not be the right choice for every language. So one thing I should mention is that this is not textual inclusion like C's include. Each file must still be syntax and type correct by itself. The compiler doesn't just paste the contents. It is still a well behaved import resolution mechanism. And one subtle detail that makes this really work is that there is no search path set by some build tool config file or include path like in the C compiler that makes certain directories magically available as search roots or whatever.
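To spell out the rule, here is a hypothetical sketch in Lua (not the Futhark compiler, which is written in Haskell) of how such a purely file-relative import could be resolved: the import path is appended to the directory of the importing file, ".." components are collapsed, and the .fut extension is added. The helper names are made up; only the three example imports from the directory tree above are taken from the talk.

```lua
-- Hypothetical resolver for file-relative imports: the import path is
-- interpreted relative to the directory of the importing file, with no
-- search path involved.

local function dirname(path)
  return path:match("(.*)/[^/]*$") or "."
end

-- resolve(importer, import_path) -> the .fut file that would be loaded
local function resolve(importer, import_path)
  local parts = {}
  for p in (dirname(importer) .. "/" .. import_path):gmatch("[^/]+") do
    if p == ".." then
      table.remove(parts)        -- go up one directory
    elseif p ~= "." then
      parts[#parts + 1] = p
    end
  end
  return table.concat(parts, "/") .. ".fut"
end

-- the same physical file, reached under three different names:
print(resolve("main.fut", "foo/bar"))          --> foo/bar.fut
print(resolve("foo/baz.fut", "bar"))           --> foo/bar.fut
print(resolve("kubes/baz.fut", "../foo/bar"))  --> foo/bar.fut
```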
All of the imports reference concrete physical files that must be in the file system relative to the importing file. And to compile a Futhark program you just run the compiler on the main file, which then eventually imports every other file in the program. This has a really interesting advantage, which is that if the program as a whole is type correct and can compile, then each constituent file can also be used directly as a compilation root by passing it directly to the compiler. Sure, that might not result in interesting programs for all of the sub files, but if you pass them to the type checker instead, that will also work. And then you can get back, for every single file, by passing it as the compilation root, a syntax checked or type checked syntax tree for that file and everything it imports, which makes it really easy to write very simple yet functional editor tooling in our Emacs mode. For example, it doesn't worry about whether a file is part of a larger project. It just takes the file you are editing, passes it directly to a command called futhark check, which just runs the type checker on the file and handles its imports, and uses the information that comes back to implement things like the type under the cursor and go to definition and these other nice things. Nothing particularly fancy. A language server could also do this, but it was super easy to implement and requires almost no maintenance. So it's not necessarily the right choice for every language. One downside is that there's no notion of a shared library, since all paths must be relative to each file. There's no way to just put something in a system location and have it available to everything. And when you install third party packages, they must be put in a known and accessible location relative to the program that's going to use those packages. So actually, let's talk a little bit about packages. Language package managers solve some fairly tricky problems. The two main ones are: how do we find the packages and make them available to the compiler? And how do we deal with conflicting version bounds in dependencies? This can get really complicated. For a central registry of packages, we probably need a server, like Rust has its crates.io, but I mean, we're a desert language. We don't really have the time to spend on maintaining a server, especially if that server has to run custom software. Also, most package systems allow both upper and lower bounds on dependency versions. And solving that turns out to be an NP complete problem. So we need a fairly complicated solver. It's difficult to implement efficiently, and worse, when the solver fails because the bounds are in conflict, then it can be really difficult to explain to the user what actually went wrong and how to fix it. For example, Rust's solver in Cargo is thousands of lines of code. It's not easy to implement this well. So for our package manager in Futhark, we went with the simplest thing that could possibly work. It's really not much more than a glorified file downloader. To add a dependency on some library, you run the command futhark pkg add and then you provide a package path. Currently, that package path must be the name of a repository on either GitHub or GitLab. But we intend to make this more flexible in the future. The most important thing is that it must be able to look up the available versions somehow. When you have added a dependency, you can actually make it download the files corresponding to all of your dependencies by running futhark pkg sync.
That will populate a directory called lib, which for example might look like this if we had added the diku-dk/sorts library. What we can see is that the lib directory just contains Futhark source code, just files. And we can see that the sorts library itself depends on another package called segmented, which is then also contained in here. And from within our own program, we would then just import these files like any other file. We know they're in the lib directory. We know the names of the library. So we just write the corresponding import statement. It is a little bit clumsy, but it's fully explicit. So it's very easy to understand what's actually going on. And when it fails, it's also easy to understand why it fails. And if you want to do vendoring, you just commit this lib directory to your repository. Nothing particularly fancy. You may think it's a little bit ugly, and maybe it is, but it's kind of obvious how it works, and when it doesn't work, why. So, package versions: again, we went for the simplest thing that could possibly work, and that is git tags, which is not unusual by itself. So to release a version of your package, you just add a tag to your repository and you push it to the repository. And a given package can depend on a minimum version of another package. And the package manager, of course, also downloads dependencies of dependencies. So how does it handle cases where you have different dependencies and then they, in turn, depend on different versions of the same third dependency? Well, this is where many package managers end up being NP complete, or trying to solve an NP complete problem. But I was inspired by Russ Cox from Go, who came up with a really simple system for the Go modules system that they added a few years ago. He came up with what's called the minimal version selection algorithm. And the real trick here is that instead of trying to use the newest version of an available package, it tries to use the lowest version, so the oldest version that still satisfies all of the constraints. And then he simply says that you cannot constrain a dependency with upper bounds, only lower bounds. This works only if you basically never break backwards compatibility. And if you break backwards compatibility, meaning modifying the major version number as in semantic versioning, then that really just counts as another package, completely unrelated to the old one as far as the package manager is concerned. And this is really a design of many tradeoffs. So this thing about not being able to break compatibility, that's harsh, because breaking compatibility in small ways and coping by using upper version bounds is very common in most package ecosystems, and this doesn't support that at all. But it has a lot of pros. For example, Go uses it. So it's not fatally flawed. It clearly scales to fairly large ecosystems. It means solving for dependencies is reproducible even without freeze files. The only way this solver can fail is if a package doesn't exist, so you won't have a problem of reporting incomprehensible errors to your users. And the implementation is extremely simple. To show just how simple it is, this is the implementation of the solving algorithm in the Futhark compiler. Now, that's minus the actual code for communicating with GitHub and downloading files and so on. But this really encapsulates the actual solving algorithm, which is amazing. It's very, very short.
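To illustrate why this solver can be so short, here is a hedged Lua sketch of minimal version selection; it is not the actual futhark pkg code, which is Haskell. Every dependency declares only a lower bound, and the solver picks, per package, the oldest version that satisfies every lower bound it has seen. The package names and versions are made up for the example, and plain string comparison stands in for real semantic-version comparison.

```lua
-- Toy registry: registry[pkg][version] = list of lower-bound requirements.
local registry = {
  app       = { ["1.0"] = { { pkg = "sorts", atleast = "0.2" } } },
  sorts     = {
    ["0.1"] = {},
    ["0.2"] = { { pkg = "segmented", atleast = "0.1" } },
    ["0.3"] = { { pkg = "segmented", atleast = "0.2" } },
  },
  segmented = { ["0.1"] = {}, ["0.2"] = {} },
}

-- Minimal version selection: never pick a newer version than some lower
-- bound forces us to.  (String comparison stands in for real version
-- comparison in this toy example.)
local function solve(root, root_version)
  local chosen = { [root] = root_version }
  local queue = { { pkg = root, version = root_version } }
  while #queue > 0 do
    local item = table.remove(queue)
    for _, req in ipairs(registry[item.pkg][item.version]) do
      if chosen[req.pkg] == nil or chosen[req.pkg] < req.atleast then
        chosen[req.pkg] = req.atleast
        queue[#queue + 1] = { pkg = req.pkg, version = req.atleast }
      end
    end
  end
  return chosen
end

for pkg, version in pairs(solve("app", "1.0")) do
  print(pkg, version)   -- app 1.0, sorts 0.2, segmented 0.1
end
```

Because the only failure mode is a missing package or version, error reporting stays trivial, which is what the talk means by never having to explain an unsatisfiable set of version bounds to the user.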
And this design that we use for futhark pkg isn't really Futhark specific at all. One of my colleagues used this design to create a package manager for Standard ML, and in total that ended up being a 1500 line SML program. So you could pretty much copy the design for another language and it's very easy to implement. So those examples of language design for the desert are mostly about things that are kind of peripheral to the language itself, about how to use libraries and how to structure the code. A few other examples of where we went with this conceptual minimalism approach: Futhark is based on a familiar programming model. It's basically the model you would find in Haskell or ML: map, reduce, scan, higher-order functions, type inference. It's not the most popular programming model, but we know that people can learn it, we teach it to students all the time, and it's not particularly difficult to learn. There are a few places where we actually have language novelty, but we are very careful about when to introduce it, only when it gives us an enormous advantage. Instead we put all of the novelty (and I mean, this is research, so there has to be some novelty) in the compiler itself, which is hidden from the user. It's a black box: you give it code, hopefully it does a good job, and it spits out a program that runs fast. Another example is that we support very few compiler options, because when you add options to the compiler, especially options that affect code generation, then you end up with a combinatorial explosion of different code paths, and that is really difficult to test, and again, we're in the desert, we don't have unlimited testing resources. For a fun game, try to randomly generate some optimization options for GCC and see if you can compile a Linux kernel that works using those options. I mean, GCC supports an enormous amount of options and probably not every combination has been tested together, so probably you can find some bugs that way. So in conclusion, designing a programming language for the desert means coping with persistent scarcity of both users and maintainers. All languages start out small, most languages hope to eventually grow large. If you're designing a language for the desert, you know that it will never become large, and that means you have to make some trade-offs. And the main trick here is just to keep it minimal, and that sometimes means making choices that you would not make for a language that's supposed to be popular. And you have to realize that some things you will not be able to afford. Maybe you will never have that language server implementation. But then can you perhaps find another way to design the language, so that someone can write a go-to-definition tool in an afternoon? So if that sounds interesting, why not take a trip in the desert yourself and try out our language. I think it's really fun to use. Okay. So the Q&A has started according to the message I'm getting. So I'm going to start answering some questions. The first question is about canonical file names. Why don't we just start from the top-level directory so that all files will always have a unique import name? That would be better. Unfortunately, the problem is: if you're just opening an arbitrary file in your editor, or passing an arbitrary source file to your whatever, REPL or something, what is the top-level directory? Then you need to find that top-level directory somehow. Maybe there's a configuration file.
Maybe you can recognize it by the presence of a file with a special name. But that's more logic, more things you have to build into your tools. And in my case, or in our case, we didn't want to have that kind of logic. But for most languages, what we did here would not be the right design. But we think it is for us, because it's simple and requires less maintenance. The next question is from Efraim, and apologies if I botched the pronunciation, who asks what happens if you have a package dependency loop. So I guess this means you have a package A that depends on B, which depends on A. So in our package manager, that would be an error. When you try to install one of these packages, you'll be told that's impossible, that there is a loop. And again, maybe that's the right thing in general. Maybe it isn't. There may be situations in a large program, in a large community, where it makes sense to have mutually recursive packages. But when you are starved for resources, like we are, and you intend to always be starved for resources, sometimes you have to say no, even to features that would be useful to someone, because they raise too many questions that you don't have the resources to answer. So again, it's not about me telling you what's the right way to do a language or its tooling. It's about realizing that when you have limited resources, sometimes you need to say no to things that you might want in a different context or if you had more resources. So there's another question by Blake 2B, who asks about graphics programming and says that Futhark seems somewhat similar to GLSL, the OpenGL shading language. I wouldn't say it's particularly similar. So in GLSL you specify essentially sequential code, and then you run that sequential code in the context of a shader, which can be on fragments or vertices and so on. I'm not actually much of an expert on that. Futhark is much more high level, and you also express the structure of the parallelism. Now I would actually say that if you can express your program within the confines of GLSL or the shader languages, whatever Vulkan has, something like that, then that's probably better, because they are simpler and obviously they are more widely used. Futhark is for when you actually have interesting, complicated parallelism in your program. So writing a renderer maybe isn't an interesting use case for Futhark, but constructing a bounding volume hierarchy to describe the scene of whatever you are rendering, to accelerate the rendering, that might be more interesting, because that's more complicated to do in parallel. So Futhark only makes sense once you have a certain amount of complexity in your parallel algorithms. For very simple things, just use a simpler language. More questions? Someone is asking: has anyone used Futhark for audio or graphics or video programming? I think so. I don't know about video. So generally Futhark is not a good fit for real-time graphics, I think, because it doesn't hook directly into the actual graphics parts of the GPU; it's for general-purpose compute. You can use it for graphics, but it's probably not going to be as fast or as low latency as using Vulkan or Metal or OpenGL, because that's not really what it's built for. It's built for bulk data parallel programming.
You need a lot of hubris to design your own programming language. As a result, new languages are often engineered (or "over-engineered") for that glorious future where millions of programmers spend their lives working with the language, and a small army is maintaining the compiler and related tools. But how would you design a language that assumes this bountiful future will never arrive? A language that, even in the best of circumstances, will always be obscure and secondary? Futhark is a programming language designed for a very specific domain: high-level, deterministic, data-parallel number crunching. It explicitly disavows general-purpose use, and it is absolutely not possible to write full applications in it. Thus, even if Futhark somehow managed to become the largest conceivable success and completely dominate its domain, that would not translate into very many programmers. And even then, it would at best be a secondary or third language for most of its users. In this talk I will talk about how such a perspective has affected the design of the Futhark language and its tools. To a first approximation, this is just the "principle of least surprise" applied to every part of the language and ecosystem. As a niche language, Futhark's novelty budget is quite limited, and its users will not have the inclination to learn about syntactical subtleties, elaborate package managers or build systems. At the same time, it's trying to innovate in a challenging domain, so some things definitely will have to be novel. Balancing these concerns has been interesting, and my experiences are perhaps even useful for designers of languages, tools, or systems in similar situations.
10.5446/56906 (DOI)
Hi, and welcome to my FOSDEM 22 talk on Managarm. I will talk about the design of a pragmatic, fully asynchronous microkernel. Let me first give you some background. I work in academia, currently at Humboldt University of Berlin. I'm currently working on distributed graph algorithms, under a research grant from the German Research Foundation. But today I'm talking about an open source project, Managarm, which is an operating system. Today we'll talk about its kernel in particular. It's an open source project. It's written in modern C++. It has been active for quite some time and it has many active contributors. Let's first look at some screenshots to demonstrate the capabilities of Managarm. So here we see some screenshots demonstrating, on the one hand, GTK running in Weston, which is the Wayland reference implementation. We also see some OpenGL application here, some X11 application. Let me now explain what Managarm is all about. Managarm is a general purpose OS. It has a focus on asynchronous IO, and by asynchronous or fully asynchronous IO, I mean that basically all system calls, or almost all system calls, are asynchronous. So user space can start many system calls without waiting for them to complete, and in the meantime it can do some work on the CPU. And using that design, it's also easy to make all the drivers asynchronous, all the servers asynchronous, and so on. Managarm also has good source-level compatibility with Linux, which we achieve by emulation in user space. I should note that this is really source level, so you need to recompile your application to port it to Managarm; we can't just run a random Linux binary on Managarm, that doesn't work. The advantage of the asynchronous design is that we can handle high numbers of concurrent requests using fewer resources — for example, using fewer threads and therefore also less RAM to store the stacks of these threads. We also need less CPU time. And that is really good if we look at servers. That's really an advantage for servers that need to handle high numbers of requests. The OS also works on desktops, on mobile devices and so on, but we really assume that you have a few megabytes of RAM, so it's not really suited for the tiniest microcontrollers. Of course, you can also use the OS to play Doom if you want to. It would be pretty disappointing if there was no Doom port. Okay, so the remainder of this talk will be split into two parts. The first part will look at the overall system architecture and the second part will look at inter-process communication, so IPC. And we do that because IPC is one of the most important features of every microkernel, and every microkernel typically has its own approach to IPC. So we look at Managarm's approach in particular. Okay, let's talk about the general system architecture, what kind of components the OS consists of, and how these components interact. First I want to talk about in what sense Managarm is pragmatic, because that's part of the title of this talk. We first look at L4, actually, a different microkernel which follows a kind of minimality principle, which says that a concept is only tolerated inside the kernel if it can't be moved out — basically, if any other implementation would make the system unusable or unsafe. That means that the microkernel should provide safety features, so it should provide proper isolation of tasks, and it needs to provide means for the tasks to communicate. But everything else should be handled outside the kernel. We are not quite as strict as L4 in that regard.
We also allow concepts outside the kernel if they enable the implementation of user space APIs in a fundamentally more efficient way. And fundamentally more efficient means here, for example, that we can save context switches that would be necessary otherwise or some other techniques that allow us to do less system calls and stuff like that. I should point out that although we allow more stuff in the kernel than, for example, L4, we still try to have general concepts in the kernel and not specific APIs. So for example, while we may have an API that accelerates file operations or, for example, memory mapping of files, we do not have a concept of files in the kernel. We basically only provide the tools that enable efficient implementations in user space. And if we compare the kernel of Managam to the kernel of other OSs, I would say that it's similar in scope, so basically in functionality of the kernel itself to Google's Zircon kernel. It's a bit bigger than L4, but yeah, it's still quite small compared to other kernels. Okay, I will very briefly mention three of these performance-oriented features. I will not really go into details. You can ask me after the talk if you want to know more. So the first is copy on write, as you see in fork. The kernel in Managam does not really implement fork, but it implements copy on write memory. The kernel can also implement page cache memory, so that is the memory that you get when your memory map files into your virtual address space. And we have some fast handling of interrupts of IRQs in kernel space that can avoid the context switch to user space in certain cases. Now I want to take a look at a small block diagram that depicts what the components of the OS actually are and what their purpose is. So we have the kernel which implements of course IPC, that is more or less clear every microkernel implements at least some kind of IPC. Without any IPC, the user space could not do its duties in the microkernel environment. We also have some basic memory management. It's also not surprising. You also have thread handling, again, not surprising. We do have scheduling inside the kernel, even though some microkernels implemented in user space. But the advantage of having scheduling in the kernel is that we do not need to switch to user space to do a scheduling decision. User space can still set policies and still set priorities to basically control what the scheduling does, but the scheduling algorithm itself is implemented in the kernel. The kernel also has some basic clock driver to keep track of monotonic time. So it's not a real-time clock, but rather a monotonic one that measures time since boot. Why do we have that in a microkernel? Because it turns out that some devices, for example, some drivers do need this quite early during boot, and they need it before we enter user space. So it's necessary to have this in the kernel. We also have ACPI handling inside the kernel. This is basically the interface to the firmware or on ARM or RISC-5. We use DTVs or device tree binaries, and we use these technologies to perform device enumeration inside the kernel. So that's basically what the kernel does. Of course, because we are microkernel, there are no real drivers inside the kernel. Only the basic infrastructure is inside the kernel. Everything else is in user space. So let's look at user space. Something that is below this line here will basically be user space. 
So we'll not have special privileges, we'll not be able to access the full instruction set of the CPU, because it will not be able to perform privileged operations. First, we have the POSIX sub-system. So this is where we implement most of our UNIX emulation. You could have other subsystems too if you wanted to, but right now we only do UNIX emulation here. This POSIX sub-system implements processes, for example, because the kernel only has a concept of threads, but not processes. Processes are in UNIX collections of threads that share the same address space and share some other data structures, for example, the set of open files, the open FDs. The POSIX sub-system also implements pipes, it implements the file system, the virtual file system at least, with all its mount points and stuff like that. We implement stuff like E-Pol, so interfaces like E-Pol, also timer FD, signal FD and similar interfaces and we implement slash proc slash sys slash def inside this POSIX sub-system. The POSIX sub-system, on the other hand, does not really contain any drivers, so it does not control any hardware, it only controls the UNIX emulation. Drivers are usually in different processes, different threads, I should say, and drivers are responsible for driving devices, so they perform IO, they also perform IQ handling. To perform IO, they use memory regions or access device registers that they obtain from the kernel. On the lowest level we have applications, I should probably say here UNIX applications, because these are spawned by POSIX sub-system, by the POSIX sub-system. These applications are not really special in the sense that they are only drawn here below the POSIX sub-system, but from the kernel's perspective they are just another thread. They are special in the sense that the POSIX sub-system does provide additional functionality for them, for example it provides some way to access the file system, some way to create pipes and so on. Let's briefly talk about resource management. For that purpose, Menagam uses a capability-based design and that's really well established. There's nothing really novel about this, so I will just quickly go over it for people who are not familiar with the concept. The idea of capability-based design is that a thread can only access a kernel resource if it has a capability that refers to the resource. These capabilities are represented by integer handles in user space, but they are only meaningful in the thread that performs the operation. If you just copy the integer ID to a different thread, this ID might refer to a different capability or to no capability at all. You can actually transfer the capability to a different thread, but then it might get a different ID in that thread. I should also point out that Menagam has no global names for resources. For example, the kernel doesn't implement a virtual file system or named IPC ports, and that's in contrast to other designs, even other microkernel designs that do have such global identifiers. I will quickly show a list of capabilities that Menagam supports. It's not an exhaustive one and it's also not very spectacular, so we'll go very quickly over it. We have IPC streams and threads that's more or less self-explanatory. We also have virtual address spaces and memory objects. Memory objects are collections of physical pages or of device memory, of device memory mapped registers, and they can support extra features such as copy on write or page cache. 
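To make the handle idea concrete, here is a minimal sketch of a per-thread table mapping integer handles to kernel objects, so that a raw integer copied into another thread means nothing there. This is illustrative C only — not Managarm's real API or data structures; every name here is hypothetical.

#include <stddef.h>

struct kobject {            /* stands in for any kernel resource            */
    int  type;              /* e.g. stream endpoint, memory object, thread  */
    long refcount;
};

struct cap_table {          /* one of these per thread in this sketch       */
    struct kobject *slot[64];
};

/* Hand out an integer handle for an object, or -1 if the table is full.
 * The integer only has meaning relative to this particular table.          */
static int cap_install(struct cap_table *t, struct kobject *obj) {
    for (int h = 0; h < 64; h++) {
        if (t->slot[h] == NULL) {
            t->slot[h] = obj;
            obj->refcount++;
            return h;
        }
    }
    return -1;
}

/* Resolve a handle.  The same integer taken from another thread's table
 * names a different object here, or nothing at all.                        */
static struct kobject *cap_lookup(struct cap_table *t, int h) {
    if (h < 0 || h >= 64)
        return NULL;
    return t->slot[h];
}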
We also have so-called universes, and these are collections of capabilities; each thread has an associated universe that determines which capabilities the thread has access to. Again, this is a pretty standard design and not really surprising. We also have capabilities for IRQs and for virtualized CPUs, to support hardware virtualization and to support device drivers. Before we get to the actual IPC, let's talk a bit about how system calls work in Managarm. I already said that almost all the calls are async. That includes IPC, but also mapping and unmapping memory, because, for example, mapping memory might need to wait for a TLB shootdown. Also, waiting for an IRQ usually happens in an async way in Managarm. Of course, there must be an exception, and this exception is system calls that explicitly synchronize threads. These are synchronous in Managarm; these are not async. We usually use futexes for that purpose. They are used to implement both mutexes and condition variables in user space. This is where the futex name comes from: a futex is a fast user space mutex, and it's a concept that's used in many OSes. But it's also important to have some synchronous blocking primitive to block threads when there's no work to do. The point of async system calls is more or less that we never need to block threads when there's work to do. But it can happen, of course, that a thread is just idle and there's nothing to do, and then we want to block it, because if we don't, we just waste CPU cycles. Good. So how does an async system call work? The main difference to a synchronous system call is that it does not complete immediately when control returns to user space. The user invokes a syscall, and if the syscall is async, it more or less immediately returns to user space, but its work is not done yet. So we need some other mechanism, some additional mechanism, to notify user space when this syscall returns — or rather, when this syscall completes its actual work. For that purpose, we use a lock-free ring buffer, for each thread more or less, and the kernel posts a notification to this ring buffer whenever an async syscall is done. This design is similar to, for example, io_uring in Linux. It has also been used extensively in hardware design: every modern hardware device uses a ring buffer to notify the software driver that requests completed, and I'm sure that it has been used in dozens of other contexts as well. One nice observation here is that retrieving notifications from this ring buffer requires no syscalls on the fast path. Because if there are already notifications in the ring buffer, then the user space part can just dequeue them from the ring buffer without using a syscall. We only need to issue a syscall when we want to block for notifications. Of course, user space needs to match completion notifications to pending syscalls, and for that purpose we just use a pointer-sized value.
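As a rough illustration of the notification path just described — not Managarm's actual ABI; all names and the layout are invented — here is a single-producer, single-consumer completion ring in C for Linux. The consumer drains entries without any syscall while work is available, and only falls back to a futex wait when the ring is empty; in the real design the producer side would be the kernel.

#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

#define RING_SIZE 256                      /* power of two; full-ring check omitted */

struct completion {                        /* one notification entry */
    uint64_t tag;                          /* pointer-sized value chosen when submitting */
    int64_t  result;
};

struct ring {
    _Atomic uint32_t head;                 /* written by the producer */
    uint32_t tail;                         /* private to the single consumer */
    struct completion entry[RING_SIZE];
};

/* Producer: publish one completion, then wake a possible waiter. */
static void ring_push(struct ring *r, struct completion c) {
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    r->entry[h % RING_SIZE] = c;                            /* fill the slot first */
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    syscall(SYS_futex, &r->head, FUTEX_WAKE, 1, NULL, NULL, 0);
}

/* Consumer: the fast path needs no syscall at all; we only enter the
 * kernel (futex wait) when the ring is empty. */
static struct completion ring_pop(struct ring *r) {
    for (;;) {
        uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
        if (h != r->tail) {                                 /* work available */
            struct completion c = r->entry[r->tail % RING_SIZE];
            r->tail++;
            return c;
        }
        /* Sleep until head moves away from the value we just observed;
         * if the producer already advanced it, the wait returns at once. */
        syscall(SYS_futex, &r->head, FUTEX_WAIT, h, NULL, NULL, 0);
    }
}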
But throughout the next few slides, it's always important to remember that data itself, so bytes are never cached or queued or buffered or whatever. The advantage of our async IPC is that we can handle arbitrary numbers of concurrent requests from a single thread. We've already seen that in the introduction of this talk. We also have a disadvantage, of course, and that is that queuing these IPC operations does require memory, requires memory allocations, and it requires bookkeeping. Luckily, the internal representation of these IPC operations is quite small, and it's also fixed size. There's no variable size data involved here because we never buffer message contents. For that reason, the IPC is also competitive in performance with simpler synchronous protocols, at least if we consider fast paths. Fast paths, I mean paths that do not, for example, transfer large buffers or use complex IPC operations that the simpler synchronous IPC models do not support. Let's now get to the IPC primitives. Managam uses streams as IPC primitive, and each stream has two endpoints. But remember that streams do not buffer bytes. Instead, threads can post actions to these streams, and actions are nothing else but IPC operations. And examples of such actions include sending bytes, receiving bytes, or sending and receiving capabilities. The actions on both ends of a stream are matched against each other. So that means if I post a send action to one end and receive action to the other end, these two actions are matched against each other, and then they are executed. So then the data transfer is actually done. And for that to work, the actions must be compatible. So it's okay to post a send action to one end and receive action to the other end, but it's not okay to post a send action to both ends. If that's done, then both operations will just fail. And one performance optimization here is that a single IPC syscall can submit multiple such actions. So I can, for example, send three buffers, or send a buffer and receive a buffer in a single syscall. Okay, let's now look at the actions that Managam supports. The most simple actions are called send from buffer and receive to buffer. And for these actions, the size of the data that I want to transfer must be known in advance. So I have to know a size of the buffer that I want to receive to, for example. We also want to optimize data copies. So we want to do as few copies as possible, and for this purpose, we have some additional actions. The first one is called receive inline. And this action receives data to the notification ring buffer. So that data is received as part of the completion notification, and it's bounded in size, because the ring buffer is also bounded in size. When we construct the ring buffer, we have to allocate memory of set size. And yeah, these receive inline messages cannot be too large. We also have an action to do a scatter gather IO. So this action can, for example, send from multiple buffers at the same time. And one thing to note here is that all send actions are compatible with all receive actions. So translating from the send actions to the receive actions is done by the kernel. We also have other actions that do not transfer bytes. For example, we have actions to transfer capabilities, and we have specialized actions for specific purposes. For example, to prove the identity of the sending thread to the receiving endpoint, basically. 
That is used, for example, to implement SignalFD, where the data that you want to receive depends on the thread that is performing the read system call, for example. Or rather, the read library function, because it's not a system call in Managam. All right. In addition to these actions, it's desirable to have some mechanism that supports multiplexing multiple concurrent requests over a single stream. For example, that's useful if you want to have multiple clients that talk to the same server. And for that purpose, we have additional actions. They're called offer and accept. And these actions create a new stream, an ancillary stream, that is usually only used for a single request response. A trick here now to improve performance is that subsequent actions can be delivered to the new ancillary stream without the need to invoke an additional Syscall. So a single Syscall can, for example, do an accept and then do a send or a receive on the new stream. And that is enough to implement a request response. We will now see an example of how IPC works. So in this example, we will have a client and a server, and the client will submit a request to the server. The server will send a response. So we start with a stream that has two endpoints as before. The first thing that happens is that the server does a Syscall to post both an accept and a receive inline action. And this receive inline action will be used to receive a request. The accept action, as I said before, creates an ancillary stream. So that's what this box here represents. And the receive inline action is posted to this ancillary stream. It's not posted to the original stream. Then the client posts a request and responds at the same time, or basically initiates a request rather. It posts an offer action, a send from buffer action, and this send from buffer will basically send the request, and a receive to buffer action that will wait for the response, or receive the response. So the offer action now matches with the accept action, and everything beyond that action goes to the ancillary stream. Now because the client posted a send from buffer action at the same time, the send from buffer action is matched to the receive inline action that was already posted by the server, and these actions here are all executed. Receive to buffer action that is also posted by the client is not executed yet, because there's no matching action on the other end of the stream. Finally, the server posts a send from buffer action to its side of the stream, to its end point of the stream, and this action is supposed to send the response. So once this action is posted to the stream, it's matched to the receive to buffer action, and both of these actions are executed. And that basically completes the request response cycle. The ancillary stream is afterwards just discarded, but that's a cheap operation because these streams do not buffer any bytes, they just buffer the requests. So in the end they are just more or less linked lists of actions. Okay, that concludes my talk. I want to thank all contributors of the project. I have some URLs here that you can check out and the repository, and I'm looking forward to your questions. Thank you. It implementing the POSIX or rather Linux APIs somewhat influences the design of the microkernel or the system as a whole? Yes, I think so. Absolutely. 
While implementing the POSIX subsystem, we always noticed that certain kinds of requests could not be handled efficiently in user space yet, and then we decided to add more IPC capabilities and stuff like that to the kernel to make these efficient. For example, it turned out that there is a need to identify certain threats that invoke the IPC operations. For example, because SignalFD always gives you the signals of the calling threat, which is a kind of really strange design. It doesn't matter who created the SignalFD, it basically only matters to call 3 on it and to efficiently support that use case without duplicating, for example, the stream for each threat that accesses the file descriptor. We added a request type that basically allows the POSIX subsystem to determine the caller of the IPC. That's one example. I'm sure that if I go through the syscalls and so on, I would find more examples where we did exactly that. I think the design of the kernel should always be inspired by what you actually want to do in user space, but on the other hand, we're trying to be somewhat general. There's no direct POSIX related API in the kernel. We only handed tools to user space to implement POSIX. Are there any semantic problems while implementing the Linux APIs that were interesting or something that wasn't obvious at the time? Because for my experience, you can look at the API, you can try to implement the API, but then comes around a program that, let's say, a creative way makes use of the API. There are so many semantics hidden that you cannot see immediately. That's true. We definitely had to take a look at the Linux code in some cases. I remember that E-Poll has some weird interactions, especially how it identifies files. For example, what Linux does is it uses a combination of, I think, a file descriptor number and the actual file. So basically, if you have two processes that see the different file, that different file descriptor number, then they can have different entries in an E-Poll set, which is really strange because the file descriptor number is processed local, whereas you can, of course, transfer an E-Poll file descriptor to a different process via a UNIX socket or so, and then this number is completely meaningless and there can be very weird interactions. I also remember that the graphics API that Linux uses, that we also implement, the direct rendering manager DRM, has some, actually lots of undocumented features that only really become clear if you read the kernel code. All right. I think we are at the end of your talk. There is some question left in the room, but since the talk is at its end, you'd probably take the further discussions to the chat room itself. So thanks again for your talk.
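The epoll quirk mentioned in that answer is easy to reproduce on Linux: the interest list is keyed on the pair of file descriptor number and open file description, so duplicating a descriptor lets the same underlying file show up twice. A small demonstration in C, with error handling omitted:

#include <sys/epoll.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    int ep = epoll_create1(0);
    int pipefd[2];
    pipe(pipefd);

    int a = pipefd[0];
    int b = dup(a);                         /* same open file description, new fd number */

    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = a;
    epoll_ctl(ep, EPOLL_CTL_ADD, a, &ev);   /* first entry:  (a, pipe read end) */
    ev.data.fd = b;
    epoll_ctl(ep, EPOLL_CTL_ADD, b, &ev);   /* second entry: (b, pipe read end) — allowed */

    write(pipefd[1], "x", 1);

    struct epoll_event out[4];
    int n = epoll_wait(ep, out, 4, 0);
    printf("ready entries: %d\n", n);       /* 2: one per (fd number, description) pair */
    return 0;
}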
In this talk, we explore the design of Managarm's microkernel. Managarm is a pragmatic microkernel-based OS with a focus on asynchronous operations. The talk covers various aspects of the microkernel, such as its IPC model, resource management, and user space API. Managarm's microkernel employs a capability-based design to manage hardware resources. In contrast to current mainstream OSes, Managarm's system calls never block but report completion asynchronously whenever possible. This includes system calls for common tasks such as memory management or inter-process communication (IPC). A lock-free ring buffer is used to quickly deliver asynchronous completion notifications to user space. Managarm implements a POSIX subsystem to be able to run various well-known UNIX applications (e.g., a Wayland desktop) on top of the microkernel. This subsystem is implemented entirely in user space. The kernel uses various acceleration strategies to to efficiently support this use case.
10.5446/56909 (DOI)
Good afternoon. My name is Blaine Garst. I'd like to start with a reading from the Book of Five Rings by Miyamoto Musashi, the section on timing in strategy. There is timing in everything. Timing in strategy cannot be mastered without a great deal of practice. Timing is important in dancing and pipe or string music, for they are in rhythm only if the timing is good. Timing and rhythm are also involved in the military arts, shooting bows and guns, and riding horses. In all skills and abilities there is timing. There is also timing in the Void. There is timing in the whole life of the warrior, in his thriving and declining, in his harmony and discord. Similarly, there is timing in the way of the merchant, in the rise and fall of capital. All things entail rising and falling timing. You must be able to discern this. In strategy there are various timing considerations. From the outset you must know the applicable timing and the inapplicable timing, and from among the large and small things and the fast and slow timings, find the relevant timing, first seeing the distance timing and the background timing. This is the main thing in strategy. It is especially important to know the background timing, otherwise your strategy will become uncertain. You win in battles with the timing in the Void, born of the timing of cunning, by knowing the enemy's timing and thus using a timing which the enemy does not expect. All of the five books are chiefly concerned with timing. You must train sufficiently to appreciate all of this. If you practice day and night in the above Ichi school strategy, your spirit will naturally broaden. Thus is large-scale strategy, and the strategy of hand-to-hand combat, propagated in the world. This is recorded for the first time in the five books of Ground, Water, Fire, Tradition, and Void. This is the way for men who want to learn my strategy: Do not think dishonestly. The way is in training. Become acquainted with every art. Know the ways of all professions. Distinguish between gain and loss in worldly matters. Develop intuitive judgment and understanding for everything. Perceive those things which cannot be seen. Pay attention even to trifles. And lastly, do nothing which is of no use. Today I'm going to talk about how computer science, to me, is a solved problem. I've got other things I'd like to do in my retirement, so let's get done with computer science, make it real, and get on to having a party and enjoying the world like we're meant to. I am the founder of the Planet Earth Society, a California social purpose corporation. My social purpose — my first foray into economics, it turns out — is to maximize the human potential of every person on the planet. I have a platform, I think, that might get us started on that. Here we go — first time through the slides. The agenda today: I have a new platform. I call it the Do. We solve identity, for real, on Raspberry Pis. We do white pages among you and your friends. Updates are easy: if you only have to update your list of friends, it's easy. In fact, we go even further, because we do repudiation and key rolling to get rid of the threat of quantum attacks. Soon we have unhackable hardware coming along from my friend Axel, and I'll give an overview of the hardware and software. I'll dive into tech, actually, and show you some code — or actually an algorithm — for lockless double-ended queues, which I use as the heart of my symmetric multiprocessor actor runtime.
It's a dispatch hypervisor-ish kind of thing. Eventually it will go into hardware. The past, I've had an interesting past. I think I could be arguing that I saved Unix in Act 1 and Act 2. I saved Next and built it into Apple. All of Apple and Next from kernel to software layering is I'm a key architect of all of that. Billions of people use my stuff all the time. My reference counting scheme is maybe the most often pieces of software used on the planet. Act 3, interfaces. The idea of abstract interfaces being programmable gives you a blueprint for design, so you can code to the abstract and make it work on anybody's implementation. This idea, this key idea I introduced in Objective-C in 1990. We used it to retool Next and all of that. But it spun out into Java. It was a copy. We sold them their sources. I'll get to that a little bit. Act 4, I spent some time on the C Standards Committee. It's kind of big there a little bit. Let's run to how we solve Unhackable. The big picture, Unhackable to me is measurable as fitness to purpose on distributed real-time platform. It needs Unhackable identity. It needs Unhackable hardware, software. It needs data and networking as well. I don't have time in this talk to talk about that. Today let's talk about what we have. We have in our faces, hackable everything. Your hardware is hackable. You don't want to know. I don't want to know, but I do know, but you don't want to know how bad your hardware is. Unhackable identity, phishing, spamming, and insider attacks are the worst problem right now facing stuff. Obviously software is just architected out the way and is wrong and so is networking. So what's the solution? Well, obviously I think I ought to rewrite most of which I instigated a little bit. We'll hear a little bit about that. I have come to believe and I know that we need a new language embodiment to express this new technology to actors. I've worked with Carl Hewitt for years and he's a fun guy. Start out by solving identity on little Raspberry Pis. You buy a Raspberry Pi, you plug it into your router, we do a private point-to-point network tunneling everything over UDP port 1, 2, 3, 4, 5 right now. We will pick the ports. Right now we're starting there. Basically everything coming in and out of your router is digitally signed with your identity. You know your identity. You have your white pages. You set your router up and you set your do drop up to talk to the do drops of your friends. It's not hard. So we start among your bestest friends, the folks you trust in your house, the folks you call up in the middle of the night when things go bad. To come we add additional protocols for key role and validation. You know when you roll your keys to get rid of quantum attacks in the future you need to revalidate. Then we'll revalidate them face to face with your friends. We get a signing certificate that way. Hey, last validated on whatever. So we augment protocols that use digital signatures to say last validated on. This is decades ahead of anybody's thinking right now. We're doing it right now in the code. It's fun. We started out with a non-commercial local free music, etc. Sharing kind of system. I got content being created for me out of Oslo. I've got friends in a lot of places. There's no money on this network. There's no sniting. It's a social purpose network. We use it for collaborative party planning and to have fun. You listen to free music. 
It's just a share point for stuff you do, celebrate your local restaurants, celebrate your local neighborhood, celebrate everything local because it's much better when it's local. I make handcrafted chocolate. I won't get into that. I throw some great parts. So the content is delivered in sort of self-contained tar balls. It lands inside your land. You get access to it from your phones. You stream it to your stereo's, whatever. It's right there. It's there. It's there. It's yours. It's leasely updated. Say you had a wiki page on this thing for the party coming up. You can tack on some ideas and stuff. It eventually gets synced using my data model and gets to the right player. It's sort of a slow, slow being a couple minutes versus microseconds. But anyway, it works. We can put all kinds of content in those tar balls, including the source code of the system itself. To solve hardware, it needs hardware. I've worked with this guy Axel for five or six years now. We were introduced by the US Special Ops Joint Command folks by Lieutenant Jennifer Snow. Her explicit job was to be, she was commissioned as the second version of the Office of Strategic Services, the OSS, to reach out into industry to help them solve combat problems on the ground in Afghanistan, in Iraq. Her troops that she supported were, well, you've read about some of this stuff. It's pretty bad. I've talked to some cyber warriors who have to go in there and defeat if they can, the best that the rest of the world throws at them. This hardware was submitted as a proposal for an open source nuclear warhead tracking project. I had a software solution. They introduced us because Axel was sort of this hardware genius they couldn't figure out and I was kind of a software genius they couldn't figure out. You know each other now. Well, they introduced, we hit it off. It's awesome. He solves in his new hardware the von Neumann bottleneck. Von Neumann bottleneck says you got megabytes, terabytes of memory on the left, going over these buses into this horrible cash subsystem with undocumented cash coherency and bazillions of cores on the other end. All the heat contention goes on in the caches and it eats up all energy and all the throughput on contended database stores. I mean Oracle kind of, well I don't know what the org is giving up on, but the rest of the world is giving up too. They've gone to a very lossy, you know, sometimes consistent web-based data model. I need consistency. I need data. I need provenance data. I need a decision tree telling me why my car just drove into a rock. That's not possible with current AI technology. We'll have a little bit more to say. Not in this talk, but trust me. I talk to the people who care and they do care and I have a solution for them. The solution is to build a Pycoswitch fabric that glues all your cores and smart memory together. In smart memory system, your little compare and swap instruction gets sent out as a Pycopacket. Onto this net multi-hop where you can gang these boards together, chips together. It's very bloody fast. Axl made his name in switching. His current base switching technology got sold to NTT and for 15 years held its own. He put the smarts into his back, into his backline, into this, showing my age, into this fabric. So the contention, those two little packets coming out of two different systems, they want to contend on that. One address at Axl or three, you know, and they say, well, you know, if it still has value X, we want it to be Y. 
One guy says that; the other guy says, oh, no, no, if it's still X, it should be Z. Well, you put that contention resolution in the smart memory — four-terabyte chips, each. You can gang these together, 190 terabytes, yeah, whatever. He's got two different versions; this is the 64-core one. The RISC-V cores, they're bloody efficient. And he's got all kinds of secret sauce packed into his cores that I won't talk about here. He does encryption — you know, name your bit density — he does it in hardware. It's just amazing. Software is really what I think most of you folks are really going to want to know about. How do we tame software? This has been the unending problem for 50 years. What I know, having studied — I mean, I'm a dragon-book languages guy — what I know is that all imperative languages, functional languages, logic languages boil down to actors. I didn't figure this out. This is Carl Hewitt. I've learned my PhD-level stuff from him. That's good. All these languages boil down to actors. My view is: why not just program in actors directly? We'll get there. The big idea is, let's say you had, say, compilers. Compilers have four stages: the lexer, the parser, the code generator, and the assembler. But of course, these guys don't interoperate. Well, that would make it too easy, right? Well, let's make it easy. How about this: C functions are fraught with side effects, undefined behaviors. They've got globals scattered all through them. How hard a time do you have reusing C functions? I built libraries, all kinds of libraries, and you have to code a little bit differently in libraries. You don't just call malloc. You have to have an indirect way to get to malloc so that you can use different malloc systems. Okay. Actor functions in my HectorScript language are lambdas with actor-runtime side effects. The actor-runtime side effect is to send a message. There are only three things actors do. Actors get to, on receipt of a message, change their state. They get to send a limited number of messages out there, and, well, they can bail — they can quit. That's about the three things that actors do. It's a super simple formulation. Particle physics can actually be rewritten in terms of actors. So in this example, we're going to send the result of the A-plus-B operation — I've got a sort of little immutable object subsystem in there — to the println actor. And so, you know, it's a pretty simple function: add two numbers and send them to println. Okay. Well, it turns out that println obviously is an implicit parameter. So if you pull out the implicit parameters in your code and make them part of the interface, then you have fully contained the specification — at least the invocation specification — for this function. I did this NSInvocation thing back in 1990. Pretty good. Used it for the undo manager, for distributed objects, all over the place. That's how the spell checker worked for years: it was just distributed objects, a little subsystem. The key idea, though, is that println is actually an actor. Actors don't have return values. It's really hard to think about. It's taken me 10 years. I'm getting there. But the types of messages that go into this — that's what the set notation is supposed to mean; there are supposed to be one or more, many kinds of messages — but, you know, I'm going a little fast and loose here.
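HectorScript itself isn't shown here, so as a hedged illustration of the idea in plain C: instead of returning a value, the function below names the receiver of its result explicitly, which is the "println as a parameter" move just described. All names are made up for the example and are not part of HectorScript or any real runtime.

#include <stdio.h>

/* A "receiver" stands in for an actor endpoint: the only thing you can do
 * with it is send it a message. */
typedef struct receiver {
    void (*send)(struct receiver *self, long value);
} receiver;

/* Instead of returning a value, the function names where the result goes.
 * Pulling the implicit println target into the signature makes the
 * function's full effect visible in its interface. */
static void add_then_send(long a, long b, receiver *out) {
    out->send(out, a + b);            /* the only side effect: one message */
}

/* One possible receiver implementation: print the message. */
static void print_send(receiver *self, long value) {
    (void)self;
    printf("%ld\n", value);
}

int main(void) {
    receiver println = { print_send };
    add_then_send(2, 3, &println);    /* prints 5 */
    return 0;
}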
But fundamentally, fully parameterized, these parametric signatures form the key to all the possible implementations. So let's go back to the other example. Let's say that, for example, we rewrote the compiler to be in modules. The lexer takes a char stream and token patterns, and it spits out a token stream. Right? The parser takes an input token stream — you know, the grammar for the actual language, the non-terminals, the start symbol and stuff — and produces a parse tree, an abstract syntax tree of what's going on. That in turn feeds the code generator: give it some kind of target architecture and the AST coming in, and it spits out a low-level intermediate representation based on that architecture. I recruited Chris Lattner and the LLVM technology into Apple at one point, and he went to my alma mater, right next to my high school. It was kind of fun. Then the assembler comes in and turns the low-level IR into actual machine code. Now, I've seen this done with a PEG grammar, a parsing expression grammar. Going from parsing to code generation is not actually hard, but it needs to be rigorous. It needs to be provable. I know people who do that kind of stuff. They bring their stuff to the C committee — the CHERI folks, for example. Software. So let me tell you about — let me back up just a bit. Oops, I can't back up. So I claim that we can turn software — you know, that little two-column thing of compiler stages — into a general concept, a periodic table of algorithms, such that any line item in a row is a valid, correct implementation. And as you go down this — in this case a two-dimensional table — a random walk down through this table produces a correct result. Any module can be replicated and run against itself or others, you know, and joined at the other end. It's a perfect test configuration. It is the ideal test configuration. So imagine a continuous development, deployment, and test configuration algorithm running on the network, on its extended network. And I claim that we can measure these algorithms for energy use as they operate — energy as a combination of, of course, CPU, memory, and network bandwidth kinds of stuff. Then you move the most efficient ones — better algorithms are a great idea — to the left of that table, for example, and this essentially becomes an evolving least-energy-use platform. Every module, of course, is signed, digitally signed. Who wrote it? And they own their rights in my system. They own the rights to use it. I ask that they license it to me for non-commercial use. If they want to make some money on this stuff, well, then that's an extra layer, because once you put money in there, then the government needs to see everything you do. Excuse my language. And it's a valid concern. But it's different than what you need to do to do party sharing. And so there's a very strict separation between things you do on my network for money and things you do for your friends. I claim this would allow, say, refugee camp folks to be coders. It gives them an economic platform. I've talked about this for years. I'm going to do this. Let's talk some tech. Go try finding a lockless double-ended queue. I did. I scratched my head for a long time. I've done a few fundamental things of interest, like the Objective-C hash table. That's a good one: absolutely synchronization-free, multi-reader, locked-writer. How did I do it? Well, if you know how...
I did that one in 1999. It's still a good algorithm. It's the cornerstone of all Apple products, honestly. That dispatcher is still in use. The skew is my new great invention. It's a lockless, double-ended queue. And it's dirt simple. You take a 16-byte entity — assume 8-byte pointers for this — a 16-byte-aligned block of memory, the thing in blue on the slide. You make use of a 16-byte compare-exchange, which is available on all hardware. Of course ARM, like the G5, uses load-linked/store-conditional, a slightly different sequence to do it, but there's a logically equivalent compare-exchange sequence there. My team got tagged with improving integer SPECmarks, and we improved them by 7% or 8% in one little run at Apple, and that enabled Steve to put out TV ads: supercomputer on a desktop, kaboom, the wall blows up. Well, my team did that. Actually, I had little tricks. I'm a hacker. We hacked. I told them, run the SPECmarks in single-user mode. That gave us 5% right there. My team guy says, oh, a single-threaded allocator will do these small allocations a lot faster — get rid of the locks. We hacked SPECmarks, but that's what everybody does. So don't believe SPECmarks. Don't believe benchmarks. Believe a real running system. Anyway, the way you do it is you have an ABA marker, so that you can detect collisions that just happen to come in with the same values — while one thread is waiting to stuff something in, something else might have changed underneath it. You need an ABA marker. You use the unused high bits of addresses. There are at least eight, actually more in practical systems, but you at least get eight. Most pointers that I care about are aligned to 16-byte addresses, so you've got four bits at the bottom. You squeeze those together into a 24-bit counter, and you always squeeze and pack your pointer and bump the counter on every operation. You treat the queue as an incoming stack in the first slot and an outgoing stack in the second slot. So work comes in as a simple push-down stack using this ABA push operation — a two-word compare-and-swap, though you're only updating one of them. You push work in on the top. When it's time to get work out, you chop it off the stack that's in the second slot. If there's nothing in the second slot, well, you do the obvious thing: you swap the pointers. You move stack one down to stack two's position, resetting stack one to zero — to an empty stack. And you keep going. It's all dynamic. You push and pull all at the same time. And so using this sort of ABA unpack-and-pack, compare-and-swap, bump-the-counter dance, you get a double-ended queue in a 16-byte data structure. The order comes out not exactly the same as it went in, but generally, if you're going to have a dozen cores going against this, the ordering is not really an issue. And in fact, with actors, the ordering is never guaranteed. So this is absolutely perfect as an actor dispatch model. So in an actor runtime, on the left, we build an actor that is basically — it's got a header. It's got a pointer for behavior and a pointer for state, and I have a skew slot in there so that the actor itself is the data element carrying the chain along. And you just need this little 16-byte header — the runlet, the middle guy. Then on the far right, you're going to have several skews: one for IO, say — IO work comes onto this queue, including from an interrupt. This is, I think, the brilliant part. You can take an interrupt and queue it. You've got a pre-allocated little message guy.
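Before going on with the interrupt path, here is a rough C sketch of the two-stack, 16-byte queue as described above — my reconstruction, not the original code. It keeps the ABA counter in the low alignment bits only (the talk also borrows unused high address bits for a larger counter), assumes nodes stay allocated while the queue can still reach them, and relies on a 16-byte compare-and-swap (on x86-64, build with -mcx16; other targets may fall back to libatomic).

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Intrusive work items: each node carries its own link, so the queue never
 * allocates.  16-byte alignment keeps the low 4 bits of a pointer free. */
struct __attribute__((aligned(16))) node {
    struct node *next;
    /* payload lives here in a real system */
};

/* Two tagged stack heads packed into one 16-byte unit:
 * "in"  = incoming stack (pushed to), "out" = outgoing stack (popped from). */
typedef struct { uintptr_t in; uintptr_t out; } dq_t __attribute__((aligned(16)));

#define TAG_MASK ((uintptr_t)0xF)
static inline struct node *dq_ptr(uintptr_t v) { return (struct node *)(v & ~TAG_MASK); }
static inline uintptr_t dq_pack(struct node *p, uintptr_t oldtag)
{ return (uintptr_t)p | ((oldtag + 1) & TAG_MASK); }      /* bump the ABA counter */

static inline bool dq_cas(dq_t *q, dq_t expected, dq_t desired) {
    return __atomic_compare_exchange(q, &expected, &desired,
                                     false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
}
static inline dq_t dq_load(dq_t *q) {
    dq_t v;
    __atomic_load(q, &v, __ATOMIC_ACQUIRE);
    return v;
}

/* Push: a plain tagged-pointer stack push onto the incoming slot. */
void dq_push(dq_t *q, struct node *n) {
    for (;;) {
        dq_t old = dq_load(q);
        n->next = dq_ptr(old.in);
        dq_t new = { dq_pack(n, old.in & TAG_MASK), old.out };
        if (dq_cas(q, old, new)) return;
    }
}

/* Pop: take from the outgoing stack; when it is empty, move the whole
 * incoming stack over in one CAS and try again.  Batches come out in
 * reversed order, which the actor runtime does not care about. */
struct node *dq_pop(dq_t *q) {
    for (;;) {
        dq_t old = dq_load(q);
        struct node *out = dq_ptr(old.out);
        if (out != NULL) {
            dq_t new = { old.in, dq_pack(out->next, old.out & TAG_MASK) };
            if (dq_cas(q, old, new)) return out;
            continue;
        }
        struct node *in = dq_ptr(old.in);
        if (in == NULL) return NULL;                 /* queue is empty */
        dq_t new = { dq_pack(NULL, old.in & TAG_MASK),
                     dq_pack(in,   old.out & TAG_MASK) };
        dq_cas(q, old, new);                         /* retry either way */
    }
}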
You've got to help me out here: you slam in what the interrupt is telling you, and you just queue it onto the actual run queue that cores are already champing at the bit on. In an unstoppable, lock-free manner. I think this lets you restructure kernels the way they ought to be. Some of this stuff's being done in the hardware; we could do this in software until we get there. You get an IO work queue skew, you've got database, you've got crypto. You've got all kinds of different workloads. You can model, you know, those work queues. And each core will have a run list of skews that it's supposed to work on. You can share this run list among cores. You can have one per core if you want. So this is sort of a scheduling tree. There are simple ways to extend this to go way beyond M-to-N kinds of ordering algorithms. And you just let all the cores have at it — have at the work they're supposed to do. So the basic core algorithm: what a core does is it basically says, well, I've got my run list, so let's grab a queue, a skew, off the run list. And the skew, of course, has a bunch of actor work scheduled on it — as I said, chained on it. It turns out each actor can have all the messages that are destined for that actor on its own chain. I call it a tail queue. That algorithm is published already. You may not even need a tail queue; a simple stack might work there as well. But anyway, when a core gets to an actor that's got several messages, it can actually chomp them all up at once, so there are all kinds of interesting optimizations there. This is still a paper design. I use a single-threaded actor runtime; I call it Start. It's dirt simple. It compiles and runs on an ESP32 with half a meg of memory. This thing runs everywhere. Oh, yeah, I have the algorithm up there twice. Sorry. Oh well — you get the idea. So this, I think, is a fundamental data structure that people should know. I don't know whether anybody knows about it. I came up with it in 2012. I had to figure out this compare-exchange instruction on Intel. I hate Intel. And it didn't work. It just didn't work. And eventually I found out that the assembler was assembling my instruction wrong. So I had to go down to the bits and hand-assemble the compare-exchange instruction to get this thing off the ground. That was fun. I like to talk about the past, because obviously this is an audacious claim — solve all hardware and software problems. Well, I don't know. You just got to listen a little bit. I could argue in Act 1 I saved UNIX by accident. I've got a slide for that. In Act 2 I kind of went and saved NeXT. I mean, NeXT was this death spiral nobody could look away from. Eventually, well, we sold the sources. That became Java. They stripped out some stuff. But basically we ended up with an S-1, an IPO document. Basically we had enough revenue based on being a web server — WebObjects, Java-based WebObjects running on an Objective-C substrate underneath, using my Java Bridge technology. They had enough sales to go public. We were an application on Windows. We didn't have our hardware anymore. We didn't even have our operating system. But we had our application layer, which is a lot of my work. And we were going to go public with that. But, you know, there was this little battle with BeOS. I know a lot about that stuff. By the way, we saved NeXT. Apple bought us. I went on to Apple and did a huge amount of things there. 10.6 Snow Leopard was a performance release based a lot on my closures.
My closures at Apple were called blocks, with that little caret symbol on them. They rewrote half their APIs in that release alone, and you can't program without closures. Closures are pretty cool. They start out on the stack. If you copy them, they get moved to the heap. All the right pointers are patched up. It's a really cool technology. It lets you say, with network stuff, here's a little bit of code; it gets saved, and whenever some network traffic comes in, for example, your code gets called. And it remembers your local variables in the right way. This closure idea worked for C++. It worked for Objective-C. It worked for the hybrid language Objective-C++. And it actually worked for C. I took this idea to the C committee and they actually gave me permission to submit it for the next rev. But honestly, it needs memory management underneath, and C just doesn't have that. I know how to do that. I did that at Apple. There are some talks; they're referenced here. I know how to do that. You put some keywords in to mark tracked pointers, more or less, and you use the type system to cause the compiler to issue what's known as write barriers. So a little bit of C code, a little bit of code to patch in every time you do an assignment. We've got plenty of horsepower for that to happen. Scott Forstall required me to impose it with less than 1% penalty so Safari wouldn't slow down, and we met that criterion — it was in the noise. We used the branch-and-link-absolute instruction on PowerPCs to transfer up into high memory — 8K of memory up there to play with, one page of memory, 4K. And we'd swap that page in whether it was a garbage-collected app or not. So the code was the same, but we'd swap in at runtime to patch whether you got one from the garbage collector or not. That was fun. I got a patent application going for that one. I don't think it went through. In Act 4, I kind of went and took closures, supposedly, to the C committee, like I said, but in fact they were in the middle of bringing in atomics and they didn't have a syntax for it. So I put a syntax in place for that. And I stayed on and did bug fixes and stuff for C and the other languages. Act 5 is this thing I'm talking about now. A little bit more on tech. On Act 1, I'll even have another slide. What's the timing here? Yeah, we're doing good on time. In Act 1, I accidentally saved Unix. I'll have a whole slide on that. Things like Solaris, Spring, my own microkernel Physics — that stuff led to POSIX, the open systems stuff, the Object Management Group; that all came out of this fun little thing I did with Bill Joy. I went to NeXT. There I was a kernel architect and did the language layer. I put Objective-C in the kernel to help us with our Intel transition, so we could subclass drivers instead of having to write them all from scratch. Ethernet drivers were almost all the same. And stuff like distributed objects — that was what I was known for. I got interesting patents on distributed memory management for that. This thing called Portable Distributed Objects saved our bacon. We put the runtime part on Solaris hardware and we kept the GUI over on NeXTstations. That enabled us to put our database layer on the database servers and all the UI visualization on NeXT. That took off on Wall Street and made a ton of money for the company. I dabbled in elliptic curve cryptography with Richard Crandall and did some fun stuff there. The whole framework architecture turned into Java's packages, more or less. I helped contribute to that.
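Going back to the blocks discussion at the top of this stretch: the stack-to-heap copy behaviour is easy to see with the Blocks extension as it exists in Clang today. A small sketch — compile with clang -fblocks, and link with -lBlocksRuntime outside Apple platforms:

#include <stdio.h>
#include <Block.h>

typedef int (^counter_t)(void);

/* The block literal below is created on make_counter's stack frame and
 * captures the __block variable n.  Block_copy moves it (and n's storage)
 * to the heap so it survives after the frame is gone. */
static counter_t make_counter(void) {
    __block int n = 0;
    counter_t c = ^{ return ++n; };
    return Block_copy(c);
}

int main(void) {
    counter_t next = make_counter();
    printf("%d\n", next());   /* 1 */
    printf("%d\n", next());   /* 2 */
    printf("%d\n", next());   /* 3 */
    Block_release(next);
    return 0;
}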
I was essentially the owner of NSObject. I mean, all the methods on NSObject that everything derives from — that was my work. I turned it from a malloc/free model into a ref-counted one, with a sort of lazy man's nursery, the autorelease mechanism. I added garbage collection to the language. I added the finalize method. The last thing was ARC, automatic reference counting. There's a little weak-subsystem interaction you have to do, because if you have zeroing weak references, which I put in the language, you've got to check just before you let an object go whether the weak system revived it. So there's a little call out to the weak subsystem to make sure your pointer isn't being revived. I recruited, as I said, Chris Lattner, bringing in LLVM, and my friend Steve Naroff and Chris went off on a weekend bender and came up with Clang. That was kind of fun. I stayed out of that part, but they had a lot of fun with that. Act 3: we sold our sources to Sun, and they took out the dynamism, and a lot of that showed up in Java. So much of a one-to-one map that I was able to build this thing called the Java Bridge, so we could have Java on the top and Objective-C on the bottom. You could subclass. They interacted. A hybrid object model. There's some interesting stuff on garbage collection that came out of that. Java begat .NET, you know, and kind of changed the world, because suddenly hardware vendors couldn't lock their clients into their custom software. I think that's big. Act 4 is kind of a hobby in many ways — keep my mental gears sharp. I went and did some work on the C standards committee. I introduced the atomics syntax, because they were swallowing this memory model from the C++ guys without a syntax. I put syntax in there with Tom Plum's assistance. And there's a paper, in the addendum, N1525, where Paul McKenney and I talk about how to use these six types of weak memory orderings, with examples. I argued at the time it was impossible for programmers to even use. Well, it is. So anyway, I stayed on and was the sort of bug-fix wrangler for all the documents that WG14 was responsible for. Then I got tired; I resigned, kind of. I was a volunteer through all of this, of course — it's a volunteer organization — but I gave up because, well, for many reasons, C is no longer viable as a technology in my opinion. So let's do something better. How I saved Unix without even trying: I graduated out of a very special high school. PLATO was a system I used. I learned the Cyrillic alphabet. As a 15-year-old, I had LAN parties. I'd sit at my little PLATO terminal and play Dogfight with somebody in the same room on another one. It was a $15,000 item, at a time when a car cost $3,000. But I had LAN parties as a kid. Three years later, I have my master's in programming languages and systems in hand. I got started on something called the Bell Data Network. That would have been the internet. It failed. It took a long time to fail. As it failed, I transferred up into the Unix System V group. Unix System V was 14-character file names — you know, classic, original, a single file system, single CPU. I mean, basically, it was what Dennis and Ken and friends had coded up. Half of the market was using that basic technology path. The other half — Murray Hill Research had sent all of the researchers out for a summer to all these different universities. Greg Chesson came to the University of Illinois when I was there. I think Ken Thompson went to Berkeley.
They spread their researchers to the various universities with mag tapes, and Unix was seeded. It was out the door; the barn door was wide open. But the marketing guys, when they figured out that Unix was big, did all kinds of things to try to bring it back in; that's the registered-trademark stuff. There's a book out of Australia that talked about the source code of Unix, and they had it banned. I mean, Unix, oh my gosh, it was a mess. And they were losing. The market was fragmented, and there was tons of competition out there: DEC with VMS, Tandem with NonStop computing, Apollo, Pyramid experimenting with new 32-bit machines, Data General, Novell. I mean, there was a ton of market out there, tons of companies. So I get there and I go: where's the tech coming from? Because the Murray Hill guys had gone off to Plan 9; they weren't innovating. I picked up and finished the integration of STREAMS, the OSI, sort-of-TCP/IP layer, a pie-in-the-sky kind of thing. We did it, we shipped it, nobody used it. Now, Maurice Bach wrote the definitive book on System V. He worked for me. I taught him how to play Go; Go is an interesting game. So I get there, and pretty soon I'm managing the core Unix group up there, and I'm integrating all this other stuff they're doing. They have a technology called RFS, Remote File Sharing. It's stateful. RFS was eating our lunch; I talked to the right people, somebody made some kind of agreement, and it got integrated. That implemented some of the System V memory-segment kinds of stuff. And AT&T said, well, we'll integrate some of your stuff, except nobody was working on it. Well, I go, we're not going to get any new stuff out of Murray Hill. Where are we going to get this stuff? We're going to get toasted by the competition. Unix is going to die. And so I negotiated to bring in VFS in place of our file system switch. We got NFS and similar things out of that. I swear I did this in the 3.1 timeframe; maybe it was early Release 4, but I thought I did all that for Release 3. I remember getting /proc in as a bug fix, because how do you debug multi-process systems? The syscalls for debugging are single-process oriented. And so on that flimsy excuse, as a bug fix, I got /proc. That was fun. However, at some point this marketing guy comes into my office, closes the door, puts an NDA in front of me and says: you should probably sign this. I signed it. A couple of days later, I'm on a plane out to California. I get in a locked room with Bill Joy, and the goal is: let's merge in the system calls from SunOS. And so we did. We went through the system calls. I remember arguing and saying, no, what's this fchdir thing? I mean, who wants to change directory to a file descriptor of a directory? And he goes, well, I'm not sure, I think our source-control-system guys use it. I go, well, we don't have to have it in the merged thing, right? Yeah, I guess not. Well, anyway, on STREAMS, I said, we have STREAMS, because, well, we got the BSD stuff, so we got BSD sockets; and I go, well, you know, but there's an emulation layer of sockets on STREAMS. He goes, yeah, we'll figure it out. That's fine. He didn't argue. And I figured out only later why: they didn't tell me there was a hardware deal. AT&T, here's the press release, bought 20% of Sun in non-voting shares. They got exclusive access to SPARC for workstations. They were ditching Western Electric.
Sun was going to keep using SPARC for servers. That was phase one of the deal. And everybody was going to end up paying Unix license fees because of this. This was a huge win for AT&T: they were going to be in the computer business big, with SPARC workstations, everybody paying them fees. They loved it, on paper. That's how SunOS turned into Solaris: two days with Bill Joy in a locked room. This is probably a bigger disaster than Osborne's portable computer, where he had a very viable product, announced that in about eight or nine months they'd have a much better one, sales dried up, and the company died. AT&T thought this was the win of the century; instead, the industry revolted. Even some of our own players formed this group called the Hamilton Group in opposition, and they went off and came up with some generalized Unix kind of system thing. They became known as the Object Management Group. Sun joined it, pushed some of my interface ideas toward them, and they pitched them out as CORBA. But meanwhile, phase three of this deal was Bell Labs: three of us from Bell Labs, three folks from Sun. They hired Mike Powell and James Mitchell, and Bill Joy would show up. Sometimes Rob Gingrich and a few other guys from the SunOS team would show up. We met a few times. Mike Powell came up with the name Spring. We didn't know what we were going to use; we didn't know what language we were going to use. All of our research was next door: they had Modula-3 there. Modula-3 actually has something like interfaces in it. I don't remember whether I even knew that at the time, but it was an interesting little place. We actually had a sign on the door at the park. I've got videos from it. It was an amazing year or so. But they ditched me. They said, if you want to work for Bell Labs, you've got to come back to New Jersey. While there, I came up with this Physics nano-kernel, after they split off. It was a register-transfer micro-kernel. They'll claim he just told us about this idea; I believe it came from David Cheriton's V System work. That was a cool idea. I built a whole micro-kernel on it. I've got a slide for that. Do I still have time? What's my time? Yeah, I'm wrapping up. Okay. So AT&T ditches it. We donated this spec, this combined Unix spec, to a user-group standards body. This turned into POSIX. And license-free Unix: now you could do clean-room implementations. They added Linux stuff to it as well. It became possible, standardized. That's where license-free Unix everywhere comes from. It started with Bill Joy in a locked room for two days. Physics was pretty cool. It's just address-space-based domains, wandering threads; like I said, a lot of V System from David Cheriton. Threads carried CPU limits; that was my idea. It was basically IPC in registers, or capability channels, as the only primitive. Everything else builds on top of it. We transfer capabilities across messages, like in Mach. I mean, this is classic actor stuff; I didn't know actors at the time. And the performance, I loved it: I could get across address spaces in about ten times a C++ vtable dispatch. This was enough to build multi-kernels, folks. It was. I convinced Avie of that, and he hired me. And, well, anyway. These upcalls to the domain: that's actually an actor runtime, is what I now know. I didn't know how to do it at the time; I do now. That's pretty much the end. Here's some HectorScript. In this code, actor messages are delivered in an arbitrary order.
And so what you will see is that "that's all, folks" shows up, and then two seconds later, after a 2000-millisecond delay, the paused message shows up. I'm going to teach HectorScript on TheDew, and I'm going to do it with people mostly as a volunteer organization to get started. I've got industrial interest in this, believe me. But we're going to do it right. We're going to get the bugs out of the white-pages protocol while the hardware gets finished and taped out and produced. So I've got a couple of years to get this stuff working on Pis before I can move it onto unhackable chips. It's pretty cool. Yeah, that's pretty much it. If you read the PDF, you'll see references to various other stuff, but I don't want to talk about it. That's it. Thank you. How do I stop this recording? Oh, wait a minute, I do have to show you one thing. Oops. Wait a minute, this is the wrong slide deck. Maybe I'll have to start over. Oh, wait a minute. And here he is. Microphone. Let's try that again. Voila. So, very interesting. Of course, here it's microkernels, so our question would be: what do you think of microkernels and current operating systems? I'm sorry, say that again; what do I think about which? Microkernels and current operating systems. Well, they're all corrupt. Yeah. I mean, there have been bugs in hypervisors, right? So you've got to get the bugs out of the hypervisors first. But operating systems try to present a user model, effectively. Why does my Unix system have multiple users when it's clearly a device that only I use? You know? Why do we still have signals? Why do we still have this idea that you can take a signal in a multi-threaded process, when you really can't unless you have a lockless allocator? So the C standard, for example, says you just don't use signals. They don't know how to get rid of things. So our operating systems and our networking are built on abstractions with architectural flaws, and the only way to fix them is to replace them. So I would guess you're also an opponent of POSIX? I think POSIX, any time you standardize interfaces like they did with POSIX, means you can't grow them. When I got into the System V group, I go, oh, well, you know, the UIDs, I can't remember, I think they were 32-, maybe even 16-bit at the time, and I said, well, we've got to grow them. They go, oh no, we can't do that because of the ABI. So whenever you lock things down in an ABI situation, you can't get rid of it. On the C standards committee, for example, one of the timing things comes back in a 32-bit value, but it's actually tracking the clock; it overruns within a day. Okay? And they don't know how to fix that. So they're introducing new APIs, which I told them to do, but they can't fix existing APIs. So ABI, binary compatibility, is a marketing idea to keep Intel profitable, more or less. So what I wanted to ask is, this Dew is some kind of net, right? Yes. It's a platform. Okay, it's a platform. You program on it, it stores your data, it interacts with you. It's your new operating-system data model, geared towards social purpose. Yes, I got it. And it's for local use, for your friends? So it starts out; it runs on these Raspberry Pis, it runs on my Mac, it runs everywhere, but it starts out in deployment on these little Dew drops. You plug a Dew drop into your router, and then it is a private network that only talks to other Dew drops.
But inside, because it's inside, you plug it into your router, you get full access to it from any device you have now. So it's a captive Linux client, and we can run whatever we want on that Linux client. It's not connected to the rest of the internet, so it's not subject to all the other issues. Like SSH: SSH's protocols have been broken several times, so we don't use SSH. It's not hard to do that; I've got remote modeling going on right now. Because these are your friends, they run the same software you run, and you trust them not to hack you. I think that's a better way to build a model of the world. It's among friends who you know, because they digitally sign everything that comes into your machine, so you know where it came from. You take anonymity out of the internet. The internet has anonymity; it's a bad idea. You always know who sent you the data. It's the same code you use, you share better code, and we build a better world that way. Yeah, it sounds like a miniature version of the internet. It's a little bit more like a distributed Smalltalk, for example. You know, it's very programmable. I can send code over the wire, I can send data over the wire, and I can show you the layer where I add signatures and stuff; it's an actor. You get a little message in, and you need to sign it, so you stick it in an envelope, attach the signature, or the hash of the signature, and you push it on down the wire. It's a filter. Okay, okay. Yeah. 25 seconds. What's in store for the future? So I'm still recruiting: I'm recruiting folks to get inside the kernel, inside this little nuthouse, and tons of people to do the social layer. Sounds like a good idea. Okay, thank you very much for this presentation, and good luck. Thank you.
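As a rough illustration of the signing "filter" actor described in the Q&A above, a sketch in C might look like the following. All types and helper functions here are invented for illustration (crypto_sign could be any signature primitive such as Ed25519); this is not code from TheDew.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct msg      { const uint8_t *data; size_t len; };
struct envelope { uint8_t sender_id[32]; uint8_t signature[64]; struct msg body; };

/* Assumed primitives provided elsewhere: a signing routine and the next
 * actor in the pipeline. */
extern void crypto_sign(uint8_t sig[64], const uint8_t *data, size_t len,
                        const uint8_t secret_key[64]);
extern void forward(const struct envelope *e);

/* The filter: wrap an incoming message in an envelope that records who it
 * came from and a signature over the payload, then push it down the wire. */
void signing_filter(const struct msg *in,
                    const uint8_t my_id[32], const uint8_t my_key[64])
{
    struct envelope e;
    memcpy(e.sender_id, my_id, 32);
    crypto_sign(e.signature, in->data, in->len, my_key);
    e.body = *in;
    forward(&e);
}
```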
As tech lead on commercial UNIX at Bell Labs, an opportunity arose in 1988 to write a nano-kernel to end all nano-kernels, complete with an unhackable boot requirement. It was clear that cryptographically secure chip-level boot assistance was required, which guided subsequent patented ECC work at NeXT and Apple. Post Apple, in a “Social Purpose” company of his own, work has continued to fully realize this dream. The 1988 nanokernel had no threads and delivered messages across upcall channels to a thread simulation, yet that was unsatisfactory. The social element of phishing etc. is now the most feared security breach, and in the new work discussed, the complete solution space is described in the first half. New hardware is underway, solving such issues as weak memory models. In the second half of the talk, key lockless queuing primitives are discussed that form the basis of a multi-core actor runtime (MART) to subsume most if not all duties of the executive. Far richer than a hypervisor, the executive manages memory in new manners, in a memory-safe manner from the programmer's perspective. In practice, a single-core actor runtime (START), running across 32- and 64-bit ARM, x86, and Xtensa CPU architectures, is available. The language and runtime are destined for the Open Source world, unless the larger project, TheDew, makes file systems and databases obsolete in its first rollout, which will include unhackable identity. Ask: Join us!
10.5446/56911 (DOI)
Hello, this is Gabriel Parmer, and today I'm going to talk about the Composite component-based operating system, which is an OS that we've been working on for about 15 years. I'm going to be talking about a lot of research that we've done at GW, and I've done a lot of this work with the amazing researchers listed here. A lot of our work on Composite has been motivated by the idea that there's a convergence in requirements between systems that have historically been quite disparate. So we have embedded systems that have focused almost entirely on things like predictability, simplicity, size, weight and power, and cost, and all of these types of factors. But then on the complete other end of the spectrum, we have these massive data centers that have multi-tenant clouds, and these focus inherently on performance (we want to push as many client requests through them as possible), on isolation, especially multi-tenant isolation, and on elasticity, the ability to scale up and down. And what we've been noticing is that there's been an interesting convergence between these two things, in that we have the predictability requirements of embedded systems being ported over to the cloud; people care mostly now about things like tail latency. We also have the performance constraints of the cloud percolating down to embedded systems as we start to consolidate more and more things onto our small systems; autonomous vehicles are a very good example of this. And we care increasingly about isolation and security on embedded systems, in some cases full-on multi-tenant isolation. Now, the problem with this is that when we start having all three of these things in the same system, be it in embedded systems or be it in the cloud, we have a very difficult system-design optimization. When we're implementing our operating system, we need to say: I need strong isolation, but I also want performance. But these are contradictory to each other in many cases, because isolation requires protection domains, and performance requires efficiency; it requires not switching between protection domains frequently. Predictability and performance are often at odds simply because when we want predictable performance, we want to be able to put bounds on how long it takes to respond to different impulses, but doing so often requires design decisions that make performance much more difficult. Isolation and predictability are frequently at odds as well, for similar reasons to why isolation and performance are not often seen in the same system. So we have this really interesting design space, and a lot of what we've been doing is researching how we can effectively design an operating system that works very well for each of these domains and each of these different optimizations, but can also satisfy the requirements of all of them. So as you can see here, we're essentially trying to provide a foundation that provides performance, isolation, and predictability for a number of interesting domains. So how do we do this? From the very start, Composite has been focused on the idea of being a component-based operating system. And the idea is that it is very difficult to implement one version of functionality that will meet all of these requirements. So what we really want instead is the ability to customize the functionality of the system for the requirements of the overall system and its goals. So what is a component?
I think that most people who are at this conference understand what it is, but it's also one of those things that means different things to different people, so I just want to be explicit here. When we think about components, we think about some code and the associated data executing at user level, and we call this the functionality. That functionality is typically an implementation of very specific APIs that are exported for other components to use; each of these APIs consists of a number of functions that have some sort of contract, some required functionality provided by the component itself. And then, of course, that component cannot necessarily implement everything it needs from the system itself, so it has a number of explicit dependencies on APIs that should be provided by other components. So when we look at components in this way, they are a unit of reuse in the system, and because they're implemented at user level, and in our system in separate protection domains, they're also a unit of isolation. Now, a goal that we typically have in these systems is to minimize the functionality provided within each of these components to what is necessary for implementing the APIs that it exports. This is simply another way of saying that when you're implementing your components, you want to implement them in accordance with the principle of least privilege: you only want them to require the implementation that's minimally necessary for what they're trying to implement. And this minimizes the number of dependencies that they have on other parts of the system. And if we do this, you actually end up with a fair number of components in the system, and this provides stronger, finer-grained isolation than you see in a lot of different systems. One of the benefits of this is that we can start thinking about the implementation of systems as just the composition of components together. So here you see a system where we have a number of system services, each exporting, in this very trivial example, an interface; some of them require other interfaces from other components; we have applications at the top that rely on some functionality, et cetera. So we have composed a system out of components, and design has moved to that level. And now the scope of things like security compromises, faults within some of the service components, and unpredictability within the system is limited by design. So for instance, when we think about app one and app three, it might be the case that app three is relatively trusted and relatively simple: all that it's doing is talking to some device over I2C, using a channel to pass some data, maybe to app one, which talks over the network. App one might be relatively complicated, app three maybe not so much. So we'd really like the facets of the system that are required by app three to be somewhat isolated from the potential compromises or faults of app one. And we can see that we can simply look at the intersection of the two sets of dependencies for these applications to see which ones need to be in the trusted computing base. So I think this is probably kind of second nature to a lot of people here, relatively obvious, but I think it's worth saying that this ability to design the system out of components is in some sense a little bit of a superpower.
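To make the component idea a bit more concrete, here is a purely illustrative sketch in C with invented interface names (not Composite's actual source): a component that exports one small interface and declares an explicit dependency on an interface that some other component must provide.

```c
#include <string.h>

/* Interface this component EXPORTS, callable by other components via IPC. */
int         kv_put(const char *key, const char *val);
const char *kv_get(const char *key);

/* Interface this component DEPENDS ON; it is implemented by some other
 * component and reached through a cross-component invocation. */
extern void *mem_alloc(unsigned bytes);

/* Minimal single-slot implementation, just to show the shape: the component
 * uses nothing beyond its exported API and its declared dependency. */
static const char *stored_key;
static char       *stored_val;

int kv_put(const char *key, const char *val)
{
    char *copy = mem_alloc(strlen(val) + 1);   /* call into the memory component */
    if (!copy)
        return -1;
    strcpy(copy, val);
    stored_key = key;
    stored_val = copy;
    return 0;
}

const char *kv_get(const char *key)
{
    if (stored_key && strcmp(key, stored_key) == 0)
        return stored_val;
    return NULL;
}
```

The point of the sketch is only the shape: the exported functions plus the single declared dependency are this component's entire relationship to the rest of the system.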
The problem is how you actually design a system in which you can define all of the policies that are important for these goals of performance, predictability, et cetera, in components at user level. This becomes very difficult, and it actually harkens back to what Liedtke gave as advice for microkernel design in '95, and that is that a concept can be tolerated inside the microkernel only if moving it outside of the kernel, i.e., permitting competing implementations, for example in different components, would prevent the implementation of the system's required functionality. We should only put something in the kernel if it has to be there to implement the functionality of the system. So we want to be able to implement all of these things in components, which begs the question: what needs to be in the kernel? Most microkernels try to adhere to this type of advice, but they all need to bake in trade-offs in different places. Composite is, in some sense, a research vehicle, and being a research-lab system allows us to really try to push this far. So Composite tries to push this to a real extreme, in the sense that we want to have component-defined, user-level-defined things like scheduling, whereby there is no scheduler within the kernel; all of that is defined in some sort of user-level component. The parallel scalability properties of the system should also be defined, and if limited, they should be delimited, by implementations at user level, not by the implementation of the kernel itself. Similarly concurrency: the second that you define blocking APIs within the kernel, you're defining concurrency, synchronization, all of these types of policies within the system that, in our opinion, should be at user level, or at least Composite tries to put them there and see how far we can push it. We also do very strange things like move capability delegation and revocation policies to user level, so we can export them out of the kernel and redefine them as appropriate. So there are a lot of different ways in which Composite tries to push the bar. To understand how we do that, you need to dive into the abstractions provided by Composite. I'm going to go through these relatively quickly; I think a lot of these will feel very comfortable to people who know seL4. They're going to sound very similar in how I initially present them, and I'll go into the differences later. So what does the kernel provide? If the kernel is meant to be minimal, and we're supposed to be able to implement most of the required policies and functionality in components, what does that mean? It means that we have capability-table nodes and page-table nodes; all of these are resources implemented in the kernel but directly accessible from user level, and we can piece these together to create protection domains in the system. And if we combine a capability table with a page table, all of a sudden we have a component. So a component is a protection domain that includes all the memory resources indexed by the page tables and all of the kernel objects referenced by the capability tables, which define the access rights for anything executing within that component. What executes in a component? Of course threads, right? We don't do anything to buck the historical trend, since the 70s, of defining threads as our abstraction for execution. What's a thread? Of course it's a set of registers that execute in a component.
We'll see that the notion of threads gets relatively complicated later. We also, of course, when we're implementing a system like this, need to understand how memory allocation works. We do not have memory allocation within the kernel. Instead, we looked at what seL4 did with untyped memory and said that that is a fantastic, amazing idea and just kind of ported it over. We make some significant changes to the details, but the high-level idea is the same. Components can have access to untyped memory. They cannot actually do loads and stores to that memory, but it is memory that they can retype into either kernel objects or user-level virtual address space pages. If we're retyping, for instance, here into a thread and into a page-table node, you can see that those are now referenced from within the capability table because, hey, now they are actually kernel objects that we program. As I said, we can also retype some of this memory into virtual memory and map it up into components, and therefore it's actually present and accessible to loads and stores for those components. The retyping facilities of the kernel seek to keep the kernel safe and to keep components safe by simply providing the guarantee that no piece of memory can be accessed as multiple types at the same time: any piece of memory is accessed as one type and only one type within the system. There are complicated rules for guaranteeing that, but that's how it maintains safety. We also have things like IPC in the system, of course. We have multiple components and we want to talk between them, coordinate between them, as I talked about; we want to compose them into these component hierarchies within the system. And we support both synchronous and asynchronous IPC. I'm going to go into both of these in detail because these are kind of the core of what makes Composite able to do things like user-level scheduling. Before I go into that, we need to understand a little bit about how you traditionally think about IPC within microkernels, or within operating systems in general. Option one is we use things like asynchronous IPC, or bounded asynchronous IPC, and things like pipes. And that's just the idea that we could have multiple threads and they can coordinate with each other. They might have different priorities, in this case x, y, and z for the different threads. And if we want to pass messages between them, and we want to assess how long it takes for a message to percolate through processing throughout the system, from the thread on the left to the thread on the right, we need to do some sort of end-to-end timing analysis that understands the dependencies throughout the system. Especially when these threads are located across various cores, this can actually lead to a fair amount of pessimism. There's a lot of active research in trying to make this better and faster. We've also noticed that over the years, most microkernel IPC ends up being synchronous, and that ends up being, for good reasons, a lot easier to coordinate with in some cases, and it can be a lot faster. So it's not that synchronous IPC is inherently better, but a lot of microkernels have gone in that direction for a lot of functionality. And what does that look like? Well, we typically do synchronous IPC between different threads. So again, we have these three threads in the system.
And now, instead of sending asynchronous messages, we rendezvous on some sort of endpoint between threads. So the thread in the center might await a call from the thread on the left. Once the thread on the left makes a call to say, yes, I need your service, I want to effectively invoke some sort of function within you, then they rendezvous: it wakes up the middle thread, et cetera, et cetera, blocking the thread on the left. This actually bakes a lot of the concurrency logic and the synchronization logic into the kernel. It adds a lot of assumptions about things like ordering and priority into the system. So it's hard to remove policies from the system when you're doing this. If you want a predictable system, it also requires that you do things like priority inheritance, considerations around priority ceilings, managing budgets; these are all tractable problems, but they require a lot of engineering and they all inherently bake policy into the kernel in some sense. So what's interesting is we looked at this, and we started to see that if you look at history, we see on the far left this notion of synchronous rendezvous between threads as being the primary and only way to coordinate within the system, and this was back in '93, '95, a lot of Liedtke's original L4. And since then, we've been moving more and more to the right, towards this notion of what's called thread migration. This is simply the idea that you want some notion of the execution that started on the client on the left to go with the IPC, with the invocations, as it percolates through the components. So Fiasco has credit support, NOVA has priority-inheritance support; these allow some notion of the execution context to go along with the IPC within the system. seL4 with MCS support doesn't allow priority to go throughout the system, but it does allow budget to go along with invocations. So a lot of these types of systems are moving over towards thread migration. Composite actually represents a system that goes all in on thread migration, and always has. Thread migration is simply the idea that I might have multiple components in the system, and if I have a thread on the far left and it wants to make an invocation to the middle component and then to the component on the right, it actually ends up logically flowing between them. So as it makes function calls, the executing thread continues execution into the middle and then the right component. And then of course, when the right component finishes its execution, it returns to the middle, the middle returns to the left, et cetera. Now, of course, done naively, this is very dangerous, right? We can't have thread context just accessible across all of the components; we still want to maintain isolation between them. So this requires that the execution context, stuff like stacks and register contents, exists for that thread in each component; it requires that you still have separate protection domains for each of the components. But the logical executable entity persists in its execution between the different components: there's only one priority that exists for it, there's only one budget that exists for it, et cetera. This does mean that as multiple threads, for instance, invoke that middle component, we might need multiple execution stacks, and then managing how many execution stacks there are, and how to allocate them, at user level.
All of this is relatively difficult, but you can see the publication at the bottom for how we dealt with it. One of the key insights from thread migration is simply that because we are not switching between threads as we do IPC throughout the system, it means that we don't need to ever involve a scheduler on IPC, and that enables us to be able to move the scheduling logic entirely to user level. Within the kernel, we provide dispatch, and schedulers implemented in components are able to use that facility for dispatch to actually switch between threads and define their own scheduling policy. Indeed, the kernel barely knows what a priority is. Schedulers define priorities in their user level data structures. Run queues are in the user level schedulers. If a scheduler is executing in this thread and wants to switch over to another thread, it simply needs access to a thread within its capability table, which gives it the ability to dispatch over to that other thread. This is not user level scheduling as defined by a library in traditional systems like scheduler activations, because the threads that you're switching between, dispatching between, might be implementing in any protection domain, right? The scheduler doesn't know that. The scheduler's job is just to decide which to run at any point in time and can dispatch between them accordingly. Because the scheduler defines all of its logic for prioritizing budgets, all of that stuff, it defines its own notion of budget priority for all of these threads. So switching between threads means that it naturally can switch between not just the executable context using dispatching, but also its own notion of which thread is running and account for them separately, prioritize them separately, etc. Now, that's only part of what a scheduler does, of course, right? Yes, a scheduler decides which thread that run at any point in time. And right now, we see that it can do that in a really interesting way. But we don't actually define blocking semantics by the kernel in doing this. Instead, when a thread wants to cooperatively block, it invokes the scheduler. So we use IPC to call it to the scheduler, and the scheduler provides this dispatch. But that's not all that you need to actually implement user-level scheduling. Additionally, you need a source of time. So Composite provides the ability to vector timer ticks up to these schedulers in a controlled way. The schedulers can actually define their own granularity for these timer ticks on a cycle-accurate granularity, well, depending on what the hardware can provide on x86 on a cycle-accurate granularity. But this is actually sufficient for implementing a system. But we also need to actually answer the question, what happens with other interrupts in the system, not just timer interrupts, right? And one option would be for all interrupts to be vectored to the scheduler, and that would solve a lot of problems. But it does have some performance repercussions in the system, because now the scheduler needs to be involved on the processing of every interrupt. And that's possible. We have done that in the first version of the kernel. 
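As a purely illustrative sketch of the user-level scheduling just described (all names are invented; this is not the real Composite API), a scheduler component might keep its run queue and priorities in ordinary user-level data structures and rely on a kernel-provided dispatch primitive over thread capabilities:

```c
/* Assumed kernel primitive: switch execution to the thread behind 'thd'. */
typedef unsigned long thdcap_t;
extern int thd_dispatch(thdcap_t thd);

#define MAX_THDS 16

/* Run queue, priorities, and blocked state live entirely in this component. */
static thdcap_t thds[MAX_THDS];
static int      prio[MAX_THDS];      /* lower number means higher priority */
static int      runnable[MAX_THDS];
static int      nthds;

static int pick_next(void)
{
    int best = -1;
    for (int i = 0; i < nthds; i++)
        if (runnable[i] && (best < 0 || prio[i] < prio[best]))
            best = i;
    return best;
}

/* Invoked via IPC by a thread that wants to block itself: blocking is a
 * policy of this scheduler component, not a kernel abstraction. */
void sched_block(int me)
{
    runnable[me] = 0;
    int next = pick_next();
    if (next >= 0)
        thd_dispatch(thds[next]);
}

void sched_wakeup(int who)
{
    runnable[who] = 1;
}
```

Timer ticks vectored up to such a scheduler would drive preemptive decisions in the same way; having it also process every device interrupt is the overhead, mentioned just above, that the mechanism described next avoids.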
But to avoid that overhead, we came up with a way where we can actually vector these interrupt sources up to threads that are executing in the different components. Which is to say, the kernel can be told by the scheduler, using a mechanism that we call temporal capabilities, whether, when a networking interrupt arrives, for instance, the thread that it's destined for, the interrupt-processing thread, should be run immediately, thereby preempting the currently executing thread. And we make these decisions using something that we call temporal capabilities. I'm going to brush over these; they're a really cool abstraction, one of the abstractions I'm proudest that we implemented in the system, and you can see the paper for that. A temporal capability is just a slice of time that has been given for execution of threads on that temporal capability, plus an integer, or a set of integers, that's interpreted as a set of priorities to decide whether a preemption should be made. And the way that the scheduler maintains control in the system is that it is in complete control of programming these time slices that are allowed for execution, and the priorities. But T-caps are also interesting because they allow us to implement multiple schedulers in the system. Because schedulers are defined at user level, we might want to have multiple of them, so that we can specialize different parts of the system for different goals: performance, predictability, etc. And T-caps allow different schedulers to delegate slices of time to each other, thereby passing the temporal resources throughout the system so that execution can be done in these different schedulers using that time. So this is interesting because it allows us to create a hierarchy of schedulers, or even a full graph of schedulers, that are all coordinating by passing time. T-caps are a very difficult abstraction because they want to allow these schedulers to coordinate while necessarily constraining their impact on each other. A scheduler wants to say: hey, you, I'm going to give you some time, you other scheduler, but I want to guarantee that you will not use that time to interfere with this set of threads that I want to be able to execute at the highest of priorities. So this is one of the resources I didn't go over before that we have in the system. And if we think now about the kernel objects, we have threads, temporal capabilities, and synchronous endpoints that enable us to connect these resources and allow thread migration between them. Asynchronous IPC is effectively that notion of interrupts being able to generate execution, and of course we generalize that to asynchronous communication between threads, which is the asynchronous endpoints and threads that you see at the bottom here. So the one thing that I want to point out here is that every thread now maintains an invocation stack that tracks the sequence of the different components that that thread has executed in as it invokes the interfaces of each of those components. And of course, we can return from a component; it pops an entry off, just like a normal C execution stack. But of course, because this is a kernel resource, it's trusted and cannot be corrupted in any way, shape, or form. Now, this raises a number of questions, right?
It's nice that we have this facility for defining scheduling policy at user level, and it's allowed us to do a lot of research into providing these different domains of goals within the Composite infrastructure. But there's just this intuitive notion that doing user-level scheduling is probably too slow, right? So we have to answer this question of, essentially, is it fast enough to be able to do what we need in the system? Does it slow down scheduling to the point where it's useless, or limits the system in a significant way? Well, we've gone through about five iterations of these user-level scheduling infrastructures through three versions of the kernel, and we've ended up with a system called Slite, which Phani did a couple of years ago, that actually allows user-level dispatching of threads to be on the order of 41 cycles. So when we're doing cooperative switches, we actually have dispatch on the level of thread-based user-level scheduling libraries. When we're doing preemptive scheduling, then we need to dispatch from the kernel, but that's on the order of 300 cycles. In the end, this notion of the scheduler being defined at user level is not that prohibitive: the policy ends up looking relatively similar to what it would be in the kernel, a little bit simpler because it's at user level, and the dispatching latency is not an inhibitor. But there is another source of concern here, because now, if we want to do cooperative switches within the system, we don't have a blocking abstraction within the kernel, so we need to invoke the scheduler, asking for ourselves to be blocked. So: is IPC too slow if it's on the path of a lot of these scheduling decisions? And with IPC, it's hard to look at cycle counts on their own and get much from them, so I just add a comparison to seL4 here, because seL4 is one of the faster IPC systems in the world now; I'd say NOVA is maybe the only other system that I know of that is highly competitive with it. And if we look at x86-32 and Cortex-A9, you see that for a round-trip IPC from client to server and back on seL4 and Composite, Composite is competitive, if not better, here. So IPC is not free, but it ends up, in our observations, not being prohibitive, and our scheduling latencies end up usually being significantly faster than those in Linux. So in systems where Linux is sufficient, you'd expect this to be sufficient as well. In systems where Linux isn't sufficient, like some of the real-time domains, then we have to go into a design mode and make sure that everything is as efficient as it can be. So this is one core aspect of how we think about system design in Composite: we want to enable this notion of user-level policies defined by components, and that means removing the policies for things like concurrency from the kernel; that means using things like thread migration. So we've made it practical in many different ways and have demonstrated the ability of user-level scheduling to have pretty significant benefits. An additional area where we have tried to essentially remove implicit policy from the kernel is around scalability bottlenecks as we increase the number of cores within the system. The big question here is: can the kernel's APIs and implementation in some sense limit the scalability of components, or cause interference between components? If the kernel's implementation of its own primitives limits the scalability of the system in some way, then it is imposing performance policy on those components.
Our goal is to essentially say that if your user-level code, implemented and executing in a component, is scalable, and can do well as you increase the number of cores, we want it to actually be scalable, which is to say that the kernel gets out of its way. So we take this very strong position: we really, really want to move the kernel out of the way and define policies at user level, and that includes not imposing prohibitive overheads on things like scalability. And this is difficult, because on the bottom left you see we have a component that is coordinating within itself across different cores. And as it makes system calls, the arrows going down and up, if we have, for example, a big kernel lock that interposes on kernel operations to prevent race conditions, then we're no longer achieving this goal, right? That one lock on its own means that the kernel is going to be inhibiting the scalability of that user-level process. In the center diagram, you can see that there's one component that is trying to coordinate through the kernel, let's say using message-passing APIs, across cores with another component. And again, this notion of it making those calls and, for instance, requiring a big kernel lock means that another component executing on that destination core might actually feel the negative performance and predictability impacts of that message passing. And the worst case is on the right, whereby we have two components that are completely isolated from each other, executing on different cores, but they inhibit each other's execution simply because they make system calls and there's a big kernel lock that affects the performance properties of the other core; the one core has some performance impact on the component on the other core. So these are a few of the cases that I motivate here. And the exact same thing actually applies for IPIs. When we think about coordinating between different cores, we think: yes, shared memory is one way to do that, locks are a way to mediate that, and IPIs are another. If you have a multikernel-based system where you share no memory in the kernel, then this locking concern is not really a concern; however, you often do use things like IPIs to coordinate between the different cores in an event-triggered way. And I want to focus in on the middle case here, where we have a component that's sending a message to a component on another core, which is causing IPIs on that other core. But the IPIs might cause execution that interferes with the other component that's executing on that core. So again, this is policy imposed by the kernel mechanisms within the system that places limits on what you can actually define at user level in terms of performance. The same could exist on the far right, in an even worse scenario, where for instance we need to do TLB shootdowns, scheduler run-queue rebalancing, stuff like that, where two components, again, that don't interact with each other and should be otherwise isolated can feel each other's performance impact through these IPIs. So we really want to avoid all of these things, and Composite takes a very aggressive stance here and essentially says that, to solve these problems, the kernel must be entirely wait-free.
And that components at user level should entirely be able to control cashline contention within the kernel, which is to say that policies should be able to be defined, whereby two components should be completely guaranteed to never write to a shared cashline within the kernel, therefore should never cause a performance impact on each other with respect to an increasing number of cores. So this means that the kernel can have no locks. This means that the kernel by and large can't really have reference counts. This means that we need to take a very aggressive stance to the implementation of the kernel. So we do this by integrating a scalable memory reclamation facility based on time in a library that we've called parsec and you can read about it in the spec paper. I'm not going to go into it, but yeah, and a few other facilities are what's required for actually implementing this. So this has allowed us to scale up across many different cores in a pretty successful way. And we additionally provide facilities for enabling components to define effectively all every single time that IPIs are used throughout the system. The kernel itself does not use IPIs for any of its mechanisms and instead user level policies are defined for controlling that when IPIs are actually used. Again, we want the kernel to get out of the way so user level can have maximum flexibility. So we've looked at using composite in a lot of different domains and I just want to go through a few very quick examples here where we look at embedded systems on the left, small little micro controllers in the middle and edge data centers on the right. On the far left, we have a lot of work that we've done here and I'm just going to overview some of it mainly from an engineering standpoint. We've implemented a principle of least privilege focused RTOS on the system. We've also implemented a personality for that same RTOS on SEL4 and compared the two. We've implemented net BST Rump kernels on the system, ZenLite driver domains and coordinated time coordination between them. We've ported the NASA's core flight system to be able to do flight control within the system and mixed-criticality systems and we've implemented ways in which you can have different systems, some that are mission critical on the embedded system and some that might just be there for convenience, these mixed-criticality systems and defined ways to coordinate between them so that the less important software on the system can't really interfere with the more important stuff on the system. I'm not going to go into a lot in much detail but I do want to dive into this interesting facet which is microcontrollers and microcontrollers are not a place that you typically think a microkernel would be or a component-based system would be because these have 128, well 16 to 128 kilobytes of SRAM. They don't even have MPUs, they don't have virtual memory. I'm sorry, they don't have the MMUs, they don't have virtual memory instead, they have memory protection units that just allow you to open windows to physical memory like are accessible or not. So we've actually implemented a para-virtualization infrastructure, a microcontroller, virtual machine infrastructure on the system in which we can run multiple free RTOSs on the system or generally multiple RTOSs. 
We had to overcome a lot of challenges with this microcontroller work, which required us both to modify the kernel and to modify a lot of those user-level components, specialized for this kind of low-memory, performance-sensitive system. Just a very high-level overview of this: we looked at MPU-based isolation versus page-table-based software isolation, and how we could span the gap between these. And we used path-compressed radix trees to flatten a system of protection into the limited MPU regions that are required. And because we have these limited MPU regions, we came up with different techniques, some static, being able to solve for memory layouts so that we could fit all of the required protection into the limited MPU regions. But then we also treated these MPU regions as a cache, as what looks essentially like a software-managed TLB, so that we could have an unbounded number of protected regions for a component even though we have a finite number of MPU regions. And more recent work enables us to have efficient coordination with devices by doing kernel bypass around interrupts. I'm not going to go into that, but it's pretty cool. So this allowed us to run up to eight virtual machines in 128 kilobytes of SRAM with this system. And I'm not going to go into the details here; you can look at the papers, or you can pause. But the whole idea here is that there are some performance impacts for doing this, but by and large, the microkernel VM is not that much slower than raw FreeRTOS executing with no isolation whatsoever. So this means that you can have strong isolation on these small microcontrollers without giving up too much in terms of performance. And we've actually taken it down so that interrupts can have bare-metal interrupt latencies, in work that uses TrustZone-M. So we've also, on the complete other end of the spectrum, looked at these relatively large systems that are focused on the edge. And we look at these as being essentially our traditional data centers, requiring multi-tenancy, requiring performance, all of that, but shrunk down to the much smaller facilities that could fit into a network edge: that might be a set of racks, that might be a closet's worth of infrastructure, that might just be a couple of servers. So density becomes very important: we have many tenants and they all need to fit onto the system. But then additionally, things like 5G provide one-millisecond round-trip latency, so we really want the software infrastructure of the system, in spite of this density, to have very strong latency properties. So we looked at what we might be able to do to implement this type of a system. If we use processes, well, processes don't provide that much isolation, because of the full Unix-based APIs. They are relatively scalable, we can get good density with them, and they provide decent startup time. The thing is, we want to be able to provide isolation for each client within the system, so we require the creation of new protection domains for every client coming in, and startup time becomes really important. You see these types of constraints a lot in serverless computing. And for high-performance networking, you can't really use DPDK and scale up to high density because of the isolation requirements, so you're kind of stuck with what Linux gives you, which is very fast, but maybe not as fast as if you're using kernel bypass.
Containers and VMs, on the other hand, provide stronger isolation, but are not as scalable and have bad startup times, right? So we've implemented a system called EdgeOS in Composite that provides an abstraction of featherweight processes (FWPs) that attempts to provide all of these properties. The goal is that we can provide 10 to 100 gigabits per second of packet processing without sacrificing isolation: every client that connects to the system should be completely isolated in separate protection domains, we want the startup of these different clients' computations to be very, very fast, and we want to scale up to thousands of these computations per host. So this is a high-level diagram of what we've done. In Composite, we've implemented components that provide things like DPDK for network access, something that controls the FWPs, the manager for them, to create them very quickly and to recycle them very quickly. And FWPs, these featherweight processes, can be added into chains of computations; if you're familiar with network function virtualization, that might sound familiar. And then our scheduler is optimized for being able to deal with the high density, fast activation, and controlled latency. All of this is implemented on top of Composite. And what we can see here is that if we look at the startup time of these different technologies, even just fork and exec within Linux, EdgeOS allows FWPs, because we've so optimized how we create these things, to be about 20 times faster. With our checkpointing and restore support, we can actually activate these even faster in response to new client requests coming in. So responding to a client request in 6.2 microseconds is around the level that we want to be at for these types of high-density systems. And because these are so specialized, we support higher throughput than Linux for things like TLS endpoints or proxies. And all of this is despite EdgeOS using message passing throughout this entire thing; we provide no shared memory between the different FWPs, so there's very, very strong security within the system. So I've talked about the system scaling all the way down to microcontrollers, and now I've talked about a system that's relatively large, on the edge, focused on security, density, multi-tenancy, and performance. And you can start to see how this component-based design can actually span the gamut between all these different systems and hopefully support a lot of future research, and hopefully serve as a foundation for different future systems. So if you have any questions or comments, please ask. This is all open source software, so you can find it all, and all the papers, on my web page. I will be looking for new PhD students probably in about a year, so if you're interested in this work, send me an email, let me know that you saw this presentation, and we can chat. Thank you so much. I really appreciate it, asking great questions and trying to identify where the key aspects are here. We definitely have a complicated policy for managing stacks at user level, and that's something that we can redefine in components; we do somewhat complicated analyses to decide dynamically how many stacks there need to be. So definitely do read the papers that we referred to, but that is kind of a key policy that we've moved up into user level. Yeah, thank you for all those questions. Am I missing any current questions? I could check. Oh, just thanks. Cool.
I do want to say we were highly inspired by a lot of what NOVA has done; we've been trying to chase NOVA's IPC performance for a long time, and I'd need to redo the measurements to see if we've caught up.
The Composite CBOS is in many ways a traditional micro-kernel. Services and policies are implemented at user-level, the kernel focuses on fast IPC, and it uses a strong capability-based access control mechanism. It has historically focused on being a research laboratory for strange features including a thread-migration-based IPC, user-level scheduling of system-level threads, user-level definition of capability policies, a wait-free kernel that scales linearly with increasing cores, and temporal capabilities to coordinate between untrusting schedulers. It also scales down and supports paravirtualized RTOSes on microcontrollers (with on the order of 64KiB SRAM, between 16 and 200 MHz, and MPUs). Composite represents a design that deviates from the L4 lineage in some interesting ways. In this talk, we'll discuss the design with a focus on how the system provides the challenging combination of predictability, performance, and scalable parallelism.
10.5446/56913 (DOI)
Welcome to my contribution to this year's microkernel developer room at FOSDEM. Thanks a lot to Martin Děcký and Sebastian Sumpf for organizing this year's devroom. It's a pleasure to be here. My name is Norman Feske. I'm from the company Genode Labs, the developer behind the Genode OS framework. And this talk will be about the last year's journey to bring together the Genode OS framework and the PinePhone. The talk will be structured in three major sections. The first section will be about the underlying motivation of this work: why have we even started doing this? The second part will be a bit like a picture story of what happened in the last year. I can't go into too many technical details, but I will give you some insights into the whole development experience. Then I will give you a brief demonstration of how Genode on the PinePhone looks right now. And finally, I will close the talk with an outlook of what to expect in 2022. So, when speaking about the motivation of this work, I have to go back a bit to a root concern of mine, which is basically a struggle that is getting ever stronger, I think, between really powerful, dominating corporations and civil society. And smartphones and the smartphone industry are, I think, one of the most impressive expressions of this power struggle. So let's take a closer look. What are the motives behind corporations in general? Don't get me wrong, I'm not anti-corporate. I can understand how this works; I'm also running a company, so I know what this is about. So you want to be profitable, and there's also a desire for growth; it's natural for companies. The growth aspect is actually mostly fueled by investors, who demand growing revenue to have their assets increase in value. So what to do about it? Of course, one way is to increase the customer base, and the other way is you also have to keep the existing customers paying. It's clear. And whenever possible, raise the margins. That's basically the credo. And what to do to keep the existing customers paying? It's called customer retention. One way to do this is to leverage platform effects: let's say if everyone is on eBay, then all sellers and buyers are going to eBay. That's basically one thing that big platform companies strive for. Another one is to create artificial dependencies, so that customers have a really hard time leaving the platform, and another strategy is to introduce new complexity and then offer aid to make the suffering of the customers less. So basically, it comes down to making your customers addicted to you, to your products, because addicts are the best customers. And to make this even more efficient, it's always good to have as much information as possible about the customers, the market, the competitors, and so on. So there's a corporate motive to seek holistic knowledge without any bounds. And when it comes to smartphones, it's really easy to see that people are really somewhat addicted to smartphones. Actually, smartphones play an ever more important part in our lives. But smartphones are in need of constant attention by the platform companies, or I would call it medication, because there's the whole story about security: you have to keep your device updated, you have to keep it healthy.
So you have to subscribe to some kind of service from a platform vendor to get new system updates. You have to rely on the platform provider's competence to save you from badly behaving applications. So there's this whole curation of apps. And you also expect the platform provider to entertain you a bit, like following the latest fashion. So there's a constant stream of new supplies of this kind of medication. And in today's situation, two corporations really dominate this whole business. So what about me as a member of society? For a long time I have really not used a smartphone, and I'm still not a smartphone user, but this is not a long-term sustainable strategy. I want to participate in digital society, of course. For example, I want to use a smartphone to look up the timetables for trains, or I want to use a COVID tracking application, or things like that. So I'm not opposed to any kind of convenience that a smartphone provides. I want to actually enjoy this utility value. But at the same time, I value my digital autonomy a lot. I don't want to subordinate myself to corporate interests. What this breaks down to is that I don't want to have any changes on my devices without my consent. I don't want to have questions like, do you want to install the update today or tomorrow? That's not a question I want to answer. I want to be in control of the device. I also want to be treated like a human being. I don't want to be exploited. My attention should be mine. I should not be forced to look at advertisements. I don't want to be tracked. I don't want to be data-mined or anything like that. Finally, I want to keep my communications private by default, unless I decide otherwise. And I want to keep my personal data to myself. Another aspect is that I'm always looking for sustainable paths in my life. In one way, that's the environmental footprint I leave. And the other way is that I want to keep learned skills usable. So if I learn how to operate a device, I don't want to have to learn it again in the next years. So there are of course a lot of conflicts of interest between those goals. First, I think that this whole trend towards subscription-based business models, be it cloud services or this whole security-update story, is really a taxation of digital life. And people are slowly getting used to it, and they just transfer money over to corporations in a periodic way. At the same time, data gets more and more centralized into silos controlled by big companies. And those are the same companies who basically look through this data and make out the patterns, analyze this data, or even give this data to third parties without the consent of the end user. So the corporations have become extremely dominant and powerful in these areas. And as another aspect, because of the whole consumer cycle, the pressure to always release new electronic products and to have customers buy new versions of the devices, a lot of electronic waste is generated. So these are all political problems. And ultimately, the microkernel devroom is not the right place to solve them. That's clear. But on the other hand, we as technologists can at least give politicians some kind of argument that there may be alternative paths forward. And this is the whole story of where my motivation comes from. It's not a kind of quirk that I only have for myself. There are a few others who share similar sentiments. One prominent example is the Precursor project by bunnie.
It's a kind of extreme approach to a mobile phone that takes open hardware and open software to an extreme. The hardware is based on a custom SoC built in an FPGA and the software is completely developed from scratch. The device is deliberately a deviation from smartphones. It's more comparable to a feature phone, which was a deliberate decision by these developers. But what about smartphones? When looking at smartphones, one company really catches the attention, which is the Pine64 company that produces the PinePhone. And this PinePhone is marketed as an open-source-friendly phone, catering to the open source developer community, especially the Linux community. The device supports the mainline kernel. There are diverse distributions. The SoC is a bit old, but it's well understood and it's actually quite well documented. And the nicest thing is that it's readily available and quite affordable. So it's a good device for tinkering, basically. That's really attractive. But open source is not enough. Even if we say, ah, we use a Linux operating system on top of this phone, isn't this all good? No, it's not, because the complexity of this whole software and hardware combination defeats autonomy. Still, when using a Linux system on such a phone, you make yourself dependent on this distribution and you have to place faith in a software stack that is just incomprehensibly complex. We notice this on Linux; for example, I have been using Linux for decades now, and whenever I update my Linux distribution, I think, ah, hopefully nothing changes too much. So there is a lot of faith involved, and that's something we cannot really overcome with such complex software stacks. And finally, there is also no cure in sight for this whole security-update story. This is something we just have to accept. So of course, when looking at Genode, this is something that we at least promise to change. When structuring the system around the principle of least privilege and sandboxing all tiny operating-system and application functionality into small sandboxes, this whole situation becomes much easier. There is much less need to curate applications, and also much less need to keep the system medicated all the time. It's more comparable to a kind of cure than the relieving of symptoms. That, I think, can be a huge contribution when combined with smartphones. And so I made it my mission to combine Genode with the PinePhone in the last year. My aspiration is really to have a replacement for a feature phone, basically featuring telephony and messaging, but also being capable of using a web browser and doing encrypted communication and encrypted storage. And what's also important to me is a halfway decent battery life. Currently I'm using an old Nokia phone, which has a week of battery life, which is really nice, and I want to have this also in the future. I don't want to address any kind of entertainment or comfort functions; that's currently out of scope. So what was this whole process about? How did I start it and what did it look like at the different stages? When starting such an ambition, we first have to look at the way the operating system boots on the platform. So when the system is powered up, the bootloader takes over control, which is U-Boot in our case; in the case of the PinePhone, it's the default bootloader.
And then this bootloader boots the ELF image of a program that runs in physical memory, which is really called bootstrap. And from there on, this bootstrap program bootstraps the kernel and virtual memory and so on and so on. So the first thing is really jumping from U-Boot into a custom piece of program that gives us any kind of life sign. And this is basically the first small challenge: to come up with a small program that can be started directly from U-Boot and gives us some characters on the serial line. This is how it looks. There are a few lines of C code, very simple, directly poking memory-mapped registers, here for outputting some characters (a minimal sketch of such a program follows after this paragraph), and then a small bunch of instructions to compile the code into an object file that can be loaded directly into the device memory. And you see the result, and this can make one really happy when it happens, because now we have passed control to our own code. So the next thing that we aspire to do is to port the kernel, the microkernel. And here the Allwinner SoC of the PinePhone is really nice to us because it consists of components that are readily supported by the existing base-hw kernel of the Genode OS framework. It's a pretty aged, by now, 64-bit ARM SoC. It uses the standard ARM GIC version 2 interrupt controller, the generic timer infrastructure, a very simple UART interface, and of course a custom memory layout that needs to be configured. But all in all it comes down to basically mirroring the platform support for an existing SoC, for example the i.MX8, and adjusting the files for the PinePhone, like picking the right interrupt controller and setting up the physical memory layout. This basically takes, I would say, a week or so; this is not a big challenge. But the biggest challenges come ahead of us. So the next small exercise, to get our feet wet with the hardware, is to access some kind of device hardware. The first tempting thing to do is to run some kind of GPIO controls on the chip, to toggle some pin state or to sense some pin state. For this, a small test program can be written that uses Genode's basic services like IRQ and MMIO to access memory-mapped registers and receive interrupts. And this is really fun because it draws the connection between the physical world, like we have here a pin that goes into the SoC, a physical thing that is basically a wire, and it goes into the SoC, then it goes through a bunch of logic and then there is some device inside the SoC that maps this to the system bus, which can then be accessed by the software. And the same thing for the interrupts; it is interesting how this physical pin is basically wired up to the interrupt controller and then ends up in the microkernel's IRQ handler and finally the user application. So the nice thing about this step is that it really lets you see the physical effects of some piece of software, which is quite nice. Now the next thing, once this basic test was running, was to bring a bit of order into this whole situation. So instead of allowing this program to access all the hardware, we introduced a platform driver that basically assigns certain specific memory-mapped I/O ranges and interrupt numbers to this specific program. So there is a kind of access control enforced by the platform driver. And as another step, we go further: we change this program into a proper driver with a driver interface that allows other programs to control individual pins.
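To give an idea of what such a first life-sign program can look like, here is a minimal sketch in plain C. It assumes that U-Boot has already initialized the serial controller and that the transmit register of the Allwinner A64's UART0 sits at 0x01C28000; both the address and the omission of FIFO-status polling are assumptions for illustration, so the actual code from the talk may differ.

```c
/*
 * Minimal first life-sign program, loaded and started directly from U-Boot.
 * Assumptions: U-Boot has already initialized UART0 of the Allwinner A64 and
 * its transmit register lives at 0x01C28000; FIFO-status polling is omitted
 * for brevity.
 */

#define UART0_THR ((volatile unsigned int *)0x01C28000UL)

static void put_char(char c)
{
	*UART0_THR = c;   /* poke the character into the memory-mapped register */
}

void _start(void)
{
	static char const msg[] = "Hello from our own code!\n";

	for (char const *p = msg; *p; p++)
		put_char(*p);

	for (;;);   /* nothing to return to; just stay here */
}
```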
So here what we see is the strength of a microkernel-based system. We have a cascade of authorities that can limit the reach of the programs. Here at the bottom, we have the kernel and the core component of Genode with ultimate authority over the whole platform. In the next step, we have the platform driver, which has been handed all the memory-mapped I/O resources and interrupts. So it has authority over all devices. But then it gives only a part of this authority over to this GPIO driver over here. And this again can assign a part of its own authority to a simple program. And so the actual program, like this LED-pulse program here, can access only one specific pin in a specific mode, which really comes down to the principle of least privilege. The experimentation setup looks like this. You can have some wires here, some resistors and an LED, and can play with it. And this is all a lot of fun. And by the way, this board that you see here is the Pine A64-LTS board, which basically mirrors the PinePhone hardware and is also available at the Pine64 store. So the next step up is the problem of dealing with real devices, not just fiddling with some GPIO pins, but some real drivers. When looking at real hardware, real drivers, you see that on ARM SoCs the situation is much more chaotic and less regular than on PC hardware, which is basically organized around the PCI bus. On ARM SoCs, it's more chaotic and you are faced with a lot of complex interplay between device details like the power regulators, clocks, reset lines, a variety of buses and many, many quirky details. Added to that, there are certain pin functions that can be selected for the SoC, which are at another level of obscurity, I would say. It's of course a nice feature of these chips, but it doesn't make the situation any easier. And on top of that, the documentation is often not available or quite sparse. So the only reliable reference to working systems is really the Linux kernel, because the SoC vendors are usually the ones who write drivers directly for the Linux kernel. And so we are now in the situation where we can look at the device trees that are shipped with the Linux kernel as some kind of hardware documentation. And the cool thing about this is that this documentation is really the ground truth, because it's actually kind of executed. If there were a mistake in the device tree, then this Linux kernel would not work on this hardware, so that's quite nice. It gives us a lot of information about the hardware. We also found that because of this whole interplay between the different devices and the clocks and resets and so on, porting drivers is much more feasible than developing the drivers from scratch. So we concentrated completely on this idea of porting complex device drivers from Linux to Genode. When starting with this kind of idea, it's useful to start with a reasonably small example, like an Ethernet driver, and we will come to bigger or more complex devices later. So when looking at Ethernet and looking at the device tree for the Pine A64 board, yeah, it looks a bit like this. This is a graph visualization of the device tree. It's basically incomprehensible.
So to make some sense out of this, we came up with some custom tooling that processes these device tree files, where we can say we are interested in a specific part of the hardware, in this case the Ethernet controller, and want to retain only the things that are related to this part of the hardware. And here the tool spits out this smaller device tree, which is in itself consistent but consists only of the parts that are relevant for networking. And this suddenly becomes much clearer and can guide our way for porting code from Linux. Speaking about porting code from Linux, one intermediate step I always did was to first try to build a custom Linux kernel that is as bare-bones as possible and only drives the hardware that I am after. So in the current case, a Linux kernel that drives the network but really doesn't know anything else. The usual process is to start with a tinyconfig Linux kernel, which is basically a Linux kernel that doesn't really work. It compiles, but it doesn't run. Then we try to find the configuration values for the Linux kernel that produce a kernel that performs a bit of network functionality, like issuing a DHCP request. To aid this process, we can look into the device tree and find information there about certain compilation units. From these compilation units, we can learn config options that are conditions for this compilation unit, so we can add these configuration options. We can also let our intuition guide us. For example, it is logical that the Ethernet controller also needs a PHY driver, things like that. And also, of course, some kind of rational thinking. But in the end, there is a gap in the middle where we can't really guess which kernel configuration options are needed. And for this, I came up with some kind of sledgehammer approach, basically bisecting the configurations of Linux to find the minimal set of configurations. Each of these config options you see here requires several kernel compiles and boots of the kernel on the platform. It took maybe a day or even two days to find out these options. It's quite a boring task, but the process terminates. And it leads us to this kind of minimalism, which is really nice, because in the end, when we have this small set of kernel options, we have a really tight Linux kernel that we can work with. And then it's time to transplant code from Linux to the Genode system and put it into a component. Here, we came up with a new version of the Linux device driver environment, which allows us to use unmodified Linux kernel code and combine it with a small emulation environment of certain Linux kernel mechanisms like the memory subsystem, initcall handling and things like that. Then there is a small API layer for certain Genode interfaces that maps the C++ interface of Genode to C interfaces. And finally, we have the Genode API. On top, we can use the services of the driver, and at the bottom, we can access the device. This is a very rigid scheme. For example, we keep C++ and C really galvanically separated. So, to wrap this up: once I got the network driver working, I switched my attention to the PinePhone as a development platform, because that's my actual goal. And so I looked at the process of how I can work with the PinePhone. The cool thing is, it can boot directly from the SD card, so you cannot really brick anything.
You can always swap out an SD card and start anew, which is really nice. It's also cool that you can basically repurpose the audio jack to become a serial line by just flipping a small switch, and there is an accessible reset button, which is really cool. For the workflow, some things should be considered. For example, SD card juggling is not really convenient. If you look at these kinds of Linux bisecting experiments, you need to boot the system hundreds of times, and juggling an SD card each time is not viable. So other strategies are needed, and we can of course also not use Ethernet, so TFTP is out of the question. So I looked at U-Boot's fastboot support, also with the help of an old friend, Ivan Loskutov, who gave me some really, really valuable hints. And so I could automate much of the workflow and combine this with some custom automation tools in Genode's tool chain. And as a small detail, there are these tiny things. For example, you have this reset button in the back of the phone, behind a small hole. You are supposed to fiddle with a small toothpick or so in this hole to reset the phone. But when developing, for example, a touchscreen driver, you have to flip the phone for each test, which is really not convenient. So I came up with this small life hack here. I had a small screw lying around, and by putting the screw inside the hole and flipping over the phone, the whole phone becomes a big reset button, which is really nice and extremely satisfying. So these are the small things that make life so much better. Yeah, now looking at the display driver, you see again the part of the device tree that is concerned with the display. You see that it is a lot more complicated than the Ethernet part. So there are lots of questions to answer. The process of cutting down Linux is more or less mechanical work. It came down to this small set of configuration options here. This is actually enough to bring up this crippled version of the Linux kernel, which basically just shows the boot logo and then crashes, maybe. But it doesn't matter, because it has successfully initialized the frame buffer, and that's what we wanted. So that's really nice. And so we can go on and select the driver sources that we find from the object files, look at them, consider them for inclusion into our component, then compile and link the result. If we see any unresolved symbols, we can generate dummy functions for these symbols automatically using custom tools that we developed over the last year (a sketch of such a generated dummy follows after this paragraph). And when such a dummy function is called, we look closely at the called function, supplement custom emulation code, and try again. This is an iterative process that eventually leads us to this result: a working graphics driver, which is extremely satisfying. Don't get me wrong, it's not all roses. It sometimes takes a really long time to get there, but there is a very structured way of doing it. So now we have this frame buffer driver running on the platform, and we can really see all the kinds of devices that are accessed by this monolithic frame buffer driver, which is really staggering. It also expresses how deeply intertwined all the different driver parts of these ARM SoCs are. So the problem arises when we try to add another driver; for example, besides the frame buffer driver, we also want to have a touchscreen driver.
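What such an automatically generated dummy can look like is sketched below. The symbol name and the diagnostic calls are invented for illustration; the code emitted by the actual Genode tooling differs in its details, but the idea is the same: report which missing function was hit and stop, so the missing emulation code can be supplied for the next iteration.

```c
/*
 * Sketch of an automatically generated dummy for an unresolved Linux symbol.
 * 'lx_example_clock_helper' is a hypothetical symbol name; printf/abort stand
 * in for whatever diagnostics the emulation environment actually provides.
 */

#include <stdio.h>
#include <stdlib.h>

struct clk;   /* opaque Linux type referenced by the transplanted driver code */

int lx_example_clock_helper(struct clk *clk)
{
	/* report which missing function was hit, then stop, so that custom
	 * emulation code can be supplemented and the test repeated */
	printf("Error: dummy function %s called\n", __func__);
	abort();
}
```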
Then suddenly we see big conflicts: both drivers want to access the same hardware resources. And this is of course not possible, or at least does not give any useful results. So now we have to start the process of cutting those dependencies, and either adding emulation code to the drivers or putting in intermediate components that reconcile the access to those devices. This is how it looks in the end. Basically everything that's related to clocks, power and reset is put into a separate driver component, and both drivers access this higher-level driver component, and all things that are related to general-purpose I/O pins go through this component over here. And now we have nicely reconciled both drivers on the platform. So this is really the whole microkernel idea here. We now know exactly which parts of the SoC are accessed by which driver, and so we can also assess the reach of a driver on the system; that's really intuitive now. So now that I have covered this small story about how the process went, let me give you a short demonstration. As you may have guessed, I'm actually using Genode right now. This is a Sculpt-based system here. This is my slide-presentation program here, and here I have a normal Linux, an Xubuntu Linux, running inside a VirtualBox, and the whole system runs on top of the NOVA hypervisor. In this terminal up here, I have connected to the serial line of the PinePhone, and over here I have a build directory for Genode for ARMv8. And now, to give you a look at the PinePhone, I will start a component to look at our webcam. So I will just deploy a small program here that captures the picture from the webcam and displays it in a window. And you can see here that this window appears here. So we can see the phone. So yes, some dust. Okay. So now I can issue the command to... So the phone is actually already switched on. You see the red cable here, so the phone is powered. It's currently stuck in the U-Boot loader and it's waiting for a fastboot command. So here I can give it the command to run this Sculpt test run script, which is part of the Genode tooling, targeting the base-hw kernel and the board pinephone, and also direct the log output to the core component of Genode. So let's give it a try. So now there's not much to build. We just integrate the image. In this case, the image also contains some pre-loaded software, so it's a bit larger than normal. So this takes a few seconds until the image is generated, but not too long after, you should see here that the system is booting up. And on the webcam, you will briefly see... Here it is. You can see that the user interface has been started. So you see now the same Sculpt OS that I'm running on this machine. So when I go back here, you see the same kind of system that I'm having on my machine here. The same system basically runs now on the PinePhone, except for some details like no persistent storage. But for example, I can click on or touch these different user interface elements and the user interface responds to this. Also, for example, when I touch the settings bar, you see that you can change some user interface settings. And one specific interesting thing is that I can deploy some scenario from memory. So I can use the RAM FS. So sorry, you cannot really see it.
But I can tell you that now the system is deploying a Genode subsystem as a deployment. And here you see that some components have been loaded. I can switch to this kind of desktop by swiping to the side here. And you see there is a small keyboard here and a terminal over here. And there's actually a window manager running. This is of course still inherited from the desktop version of Sculpt. And I can even drag these windows around and use this small keyboard. It's hard to see on the webcam, but I can type in a small command. For example, I can type in something like ls and return. And you see that this Linux... not Linux, but a native Genode-based Unix environment, is presenting some results. And I can, in principle, also use things like Vim to edit all kinds of files on the system, and so, for example, also access the configuration file system of Sculpt OS. So basically this gives me complete control over the device. Let me see. Q. Q. Okay. There's also a smaller demo I can show you. That's basically just rotating... Oh, maybe I haven't... Yeah, I failed to reach it here. I still have to train my fingers to type on the touch screen. So here a small program is coming up. You may recognize this from previous talks; it's a small software-rendering demo running now directly on Sculpt OS. And the nice thing here is that all the underlying infrastructure, like the package management concept, the deployment concept, the configuration concept, etc., is all the same: the same components as we use on the normal system on PCs. Okay. So that's basically what I can show you right now. So now, to close the talk, let me give some ideas of what's next. The vision for this year is to have a video chat scenario running on the PinePhone. I want to chat with my family, for example, using this device, using Genode on Sculpt. And there are of course a lot of challenges in the way. These are just an enumeration of things we have to work on, which is at the same time exciting, but also a bit frightening, of course. But we are really driven towards this goal. If you want to follow the progress of this ambition, you are invited to visit the Genodians website, which is basically a kind of blog for various Genode developers. And I am publishing an article series called PinePhone, where I report on the progress that I make. In the end, this article series will be integrated into a large document called the Genode porting guide, or Genode Platforms document, which will give a kind of guidance to future developers who want to bring Genode to other SoCs. So for example, if any one of you is interested in the new PinePhone Pro that is based on the Rockchip SoC, then this documentation is probably the best pathway to develop this kind of support. There's already a first version of this document online, but it will be updated in May with the new findings. Okay, thank you very much for your attention. And that is the end of my talk. Thank you. So let's start with some questions. There are some questions in the chat. You or the others have partially answered them, but maybe we can just repeat them for the stream. So there was one question about base-hw. Could you just tell us what that actually is? Yeah, sure. So yeah, first, thank you for organizing the devroom together with Sebastian this year. So yeah, about the strange name of this kernel.
So it's really the result of the naming scheme of Genode. The Genode OS framework supports quite a few different kernels. You can use the Genode OS framework on top of traditional L4 microkernels, or on the NOVA kernel, the seL4 kernel, or the Fiasco.OC kernel, and also on the Linux kernel. And at one time, like seven years ago, we had the idea to come up with a custom microkernel that is actually much simpler compared with the traditional kernels. It has a kind of different software architecture that is glued more closely to Genode. And by doing this, we expected to reduce the complexity of the base system even further. We already had a pretty tight base system with the existing kernels, but we saw the potential to reduce it further. And so we came up with the base-hw kernel. The notion is that now Genode runs on the real hardware, not with a kernel underneath; Genode's low-level system is touching the CPU hardware directly. So it's not called base-linux or base-nova, but base-hw, which means it accesses the hardware directly. But it's still a microkernel. And in practical terms, it's your native microkernel. Yes. It really fits your architecture perfectly. Exactly. Yes. Okay. So a small question, maybe just a clarification. The demo you have shown, that was really running just on the phone? There was nothing running on the desktop machine? Yeah. So maybe it was the demo where I executed Genode first on the machine, on the laptop, and then on the phone. Well, it's a bit misguiding, but yeah, the phone has a completely separate system running. And it's loaded as a single image through the fastboot channel to the phone and then executed standalone. So it's running autonomously on the phone. Yes. Wonderful. Another quick question. Do you plan to support the PinePhone Pro as well in the future? So, not immediately. I have to come back a bit. One idea behind this work was to go through the process of working through all these difficult technical challenges, but not only doing this, but also documenting all these steps in very much detail. So the idea was that, at the end of the whole process, there should be something like a book, I would say around 300 pages or so, that explains in detail how this kind of process works. And so I hope, that's at least my hope, that others will find this idea inspiring, so that someone else can also take this book, take new hardware, and apply the steps of this book to get Genode running on this new hardware. So that's my goal: to cultivate some kind of community around this idea. And so I put a lot of work into this documentation. I would say more work was spent on writing the documentation than on the actual technical work. And so I think it would excite us very much if someone else would step up and take over this idea to bring Genode to the Rockchip, for example, which is the chip used in the PinePhone Pro, and we would give assistance, of course, but the ownership of this project would be somewhere else. But anyway, if there's some commercial interest in that and someone would approach us, then of course we would be open to doing this. Another point is, I think that the PinePhone is appealing because it's a limited platform. It's universally known to be not really strong, it's a bit dated.
And if you look at the Linux systems on top of this platform, they do not really perform so well. So it's usable, don't get me wrong. I admire what's happening in this community, but it's not comparable to the user experience that we see on commercial phones or on consumer devices. And I think if we managed to really rock the phone with Genode, to bring a really nice user experience to the phone, it would also nicely allow people to compare the beauty of microkernel-based systems and the performance of our system to Linux, to basically dispel the myth that microkernel systems are slow. So if it performs better on the phone than Linux, then I think that would be quite exciting. I cannot promise this, but that's my goal. Yes, it definitely sounds very intriguing, I should say. And going back to the UI, there was a question in the chat in that direction as well. So clearly the desktop UI is really not a good fit for this small touch screen. Do you have any ideas or plans towards having a really smartphone-native, touch-screen-native UI? Yeah, there are several ideas. I think it will be two-staged. There is one stage where we have some basic interaction that gives the user control over the phone, but not much convenience. That's basically what the user interface that you see right now presents. It's very small in terms of bytes, so the image is very tight and very small. It boots up quite quickly. And then there will be a second layer on top of that where everything is up to the different use cases. For example, I think it would be really nice to bring Qt-based applications to this phone on top of Genode. For example, there is Sailfish OS that uses Qt for applications. And the applications on that OS are really native applications using QML and all the beauty of this that you expect. And there's nothing that speaks against leveraging these kinds of frameworks like Qt/QML on top of Genode. And that's what you would then present to the end user, of course. But in the intermediate step, we need to have some nice ways to interact with the phone as a developer. And this is where our kind of intermediate components like this touch-screen keyboard come into play. I think nobody will really enjoy this, but it will have its purpose for our development and for hacking with the device, playing with the power management, with the modem and all these kinds of things. Okay. There are several more questions, but sadly, the time is running out. So to prevent your answer being cut off, I would like to thank you again for this amazing talk. We are definitely looking forward to further developments in this area and in Genode in general. I encourage our attendees to ask you more questions in the chat. And thanks once again. See you. Bye-bye. Thank you.
Driven by the vision of a truly trustworthy smartphone, I dedicated the past year to bringing the component-based Genode OS to the Pinephone. The talk presents my experience story, touching on the hardware, booting, the porting of the kernel, component-architecture concerns, and device drivers. Smartphones have become a commodity almost everyone relies on. With the convenience, however, comes complexity that is impossible to comprehend and constantly changing. The opaqueness of hardware and software puts the user in a subordinate position, making their devices - and by extension many aspects of their life - dependent on the decisions of a few dominant corporations. Our personal devices are constantly changing under our fingertips. Steady updates are presumably needed to stay secure, similar to how medicine is needed to stay healthy. But are the incentives of the platform providers aligned with my interests? I want my digital life healthy without a constant supply of medicine! To reinforce trust, both hardware and software must become transparent, traceable, and tractable. The Pinephone satisfies the urge for transparency of the hardware, thanks to publicly available schematics and documentation. However, the predominant software stacks - even though based on the open-source Linux kernel - are practically inscrutable because of their immense complexity. Genode's rigid component architecture promises to bring order and clarity - and thereby trustworthiness - to the software. Over the course of the past year, I pursued the combination of Genode with the Pinephone, diving deep into the Pinephone schematics, the SoC, booting, Genode's kernel, and device drivers. In my talk, I present the experiences made, touch on the use of Linux drivers directly on Genode, and draft a plan forward. The talk will be garnished by a demonstration.
10.5446/56919 (DOI)
Welcome to my first ever FOSDEM talk. Today I want to talk about GNOME Calls, all the building blocks that it is made of, and briefly explain what it takes to place and receive phone calls. So who am I? My name is Evangelos and I've been a long-time Linux user and free software enthusiast, and what got me started contributing to free software was basically receiving my PinePhone shortly after FOSDEM 2020, where I got involved with the Mobian project. Today I work as a free software developer for Purism on GNOME Calls. So what is GNOME Calls, I hear you ask. It is a dialer application, so that means you can place and receive calls with it. It is written in C using GTK and GObject, and you can use it both for regular telephony calls and voice-over-IP calls using the SIP protocol, which is a relatively new addition. Now let's have a broad overview of all the different things that are involved. We obviously need to have some kind of user interface. There we use GTK 3 and libhandy, and also libcall-ui, which I will get to in a few seconds. Then we obviously need to talk to our modem; for that we use ModemManager. In the SIP case we went with the Sofia-SIP library, and obviously we want to be able to actually hear something when we are doing a call, and that is where callaudiod and Wys come into play in the cellular case, and GStreamer in the SIP case. So what does it look like? On the left side you can see the recent call history, which is pretty standard stuff. On the right side there is a dialpad which you can use to call arbitrary numbers, as long as they are valid. I've mentioned libcall-ui. This is basically a library that was born out of the necessity to share some UI widgets between Calls and the Phosh shell, since you usually want to be able to hang up on an ongoing call from the lock screen; this is basically where it all came from. And it's a private library which both Calls and Phosh are sharing. Now let's go to the ModemManager side. Basically, it has a daemon which talks to the modem hardware and exposes a number of objects over D-Bus. On the application side we use libmm-glib, which provides a convenient API to talk to the ModemManager daemon, and it gives us GObjects for all the different things like our modems and calls and so on. How it's roughly used is: we watch the org.freedesktop.ModemManager1 object path and look at all the objects that are exported under that path, and if an object implements the ModemManager1 Voice interface, that is basically a voice-capable modem. And then we can go on. The corresponding GObject would be an MMModemVoice; for a call, we have an MMCall. For example, if we want to start an outgoing call, we would invoke something like mm_modem_voice_create_call(), which returns an MMCall. In the outgoing case we would then call the mm_call_start() method on the MMCall, or for an incoming call the accept method, hang up when we're done talking, send DTMF tones and so on (a small sketch of this flow follows after this paragraph). Now, since ModemManager doesn't deal with any audio, we need some more pieces. We are using callaudiod, which is a daemon talking to PulseAudio and exposing some controls on D-Bus. It allows us to switch audio profiles, enable the speaker, and mute the microphone, and from an application point of view we're using libcallaudio, which gives us an easy-to-use API to talk with the daemon; these are some of the methods in the public API. Then there's also Wys. Wys uses loopback devices to set up audio routing between the host and the modem.
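To make the ModemManager flow just described a bit more concrete, here is a minimal sketch using libmm-glib. The synchronous call variants are used purely for brevity and error handling is reduced to warnings; this is a hedged illustration of the flow, not the actual code in Calls, which uses the asynchronous API.

```c
/* Sketch: start an outgoing call via libmm-glib (synchronous calls for brevity). */
#include <libmm-glib.h>

static void
place_call (MMModemVoice *voice, const char *number)
{
  g_autoptr (GError) error = NULL;

  /* describe the call we want: just the number to dial */
  MMCallProperties *props = mm_call_properties_new ();
  mm_call_properties_set_number (props, number);

  /* ask the modem to create a call object ... */
  MMCall *call = mm_modem_voice_create_call_sync (voice, props, NULL, &error);
  g_object_unref (props);

  if (!call) {
    g_warning ("Creating call failed: %s", error->message);
    return;
  }

  /* ... and actually start dialing */
  if (!mm_call_start_sync (call, NULL, &error))
    g_warning ("Starting call failed: %s", error->message);

  /* later: mm_call_send_dtmf_sync (), mm_call_hangup_sync (), or
   * mm_call_accept_sync () for incoming calls */
  g_object_unref (call);
}
```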
So it's used when your modem exposes its own audio interface to the system; that is why it's not used on the original PinePhone, because there the audio routing is done in hardware. How Wys basically works is that it also watches for any new calls on D-Bus, and once a call is in the active state, the setup of the loopback devices is done and your audio will be flowing from your microphone to your modem and from your modem to the speaker, let's say. Now let's look at the SIP side. Sofia-SIP is basically a lot of things, or brings a lot of things. It's a library for dealing with a ton of SIP-related stuff, so you have a parser and a module for SDP offer/answer, but what we're looking at from a high-level perspective is that it's a user agent and does the signaling. How we're using it inside of Calls is basically that we provide some callbacks for different event types, for example for incoming invites, so for an incoming call, or as a response to an outgoing invite; we provide certain functions that we want to have called. And Sofia-SIP also comes with an additional helper library which provides a GSource for easy GLib main loop integration. How it basically works is: when you want to register to a server, you call the nua_register function, give it some arguments like your username and which SIP server to register to, and then some SIP messages get sent. This is cut a bit short just for brevity's sake, and next let's look at how it would look for placing a call. Again you have an invite call, you tell it who you want to call, and in the SDP string you give it the session description, so which types of audio codecs you speak, and you get a message which looks something like this: in the lower part you have the body which in this case, for example, tells your peer where you can be reached and what kind of codecs you speak. The actual audio will be done using GStreamer, so in the SIP case you basically have RTP, the Real-time Transport Protocol, for sending audio over the network; it also works for video, but we are only doing audio as of yet. Basically we have two pipelines. They are obviously written in our C code, but if you are familiar with the gst-launch command, this is roughly what it takes to set up the sending pipeline, where our microphone input gets encoded, put into RTP packets, and then sent off over the network to some host (a rough sketch follows after this paragraph), and the reverse for the receive pipeline, where we receive the audio, get it out of our RTP packets, decode it and then play it back. I mentioned earlier that there are some other libraries that we are using. First of all, no dialer application is worth its name without some sort of contact integration; for that we are using libfolks, which gives us, from the Evolution Data Server backend, a list of all known contacts, and we can use it to query the contacts by phone number. And there is libfeedback, which is also very important to have. It provides us an easy way to have feedback, hence the name: it gives us audio feedback, so ringing; haptic feedback, like the vibration motor; and visual feedback, so your LED turns on. There's also libsecret, which is only used in the SIP case and is basically used to store and retrieve credentials. And last but not least we have libpeas, which provides us with a plugin system that we are using: basically all the different backends that we can use are written as plugins which are dlopened.
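To make the send pipeline mentioned above a bit more concrete, here is a hedged sketch of how such a pipeline could be built from C with gst_parse_launch(). The element choices (PulseAudio source, Opus codec), the peer address and the port are illustrative assumptions, not necessarily what Calls actually constructs.

```c
/* Sketch: RTP send pipeline roughly as described, built via gst_parse_launch().
 * Assumes gst_init() has already been called. Codec, host and port are
 * placeholders, not the actual values used by Calls. */
#include <gst/gst.h>

static GstElement *
build_send_pipeline (void)
{
  g_autoptr (GError) error = NULL;

  GstElement *pipeline = gst_parse_launch (
      "pulsesrc ! audioconvert ! audioresample ! "
      "opusenc ! rtpopuspay ! "
      "udpsink host=192.0.2.10 port=5004",   /* placeholder peer */
      &error);

  if (!pipeline) {
    g_warning ("Could not build pipeline: %s", error->message);
    return NULL;
  }

  /* microphone -> encoder -> RTP packets -> network */
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  return pipeline;
}
```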
And what does the future bring? There are a couple of things which are currently missing. In the cellular case we don't yet support supplementary call services like call holding, call waiting, transferring calls and so on; we also don't support conference calls yet, and we don't have a convenient way to access voicemail. In the voice-over-IP case we currently only do unencrypted media, and we are also lacking video calls. If you want to learn more, check out the repository over at the GNOME GitLab instance and look at the documentation, and if you have any questions, feel free to get in contact via email, Matrix, or the Fediverse. And lastly I want to thank Arnaud for creating Mobian, Henry-Nicolas for helping me with the Mobian packaging, Guido for all the code reviews, Purism for building the Phosh ecosystem, GNOME for generously hosting the project and being a helpful community, and all of you who are listening. Thank you. We are live now. So first of all, thank you very much for the interesting talk. We have four questions at this time, and people in the room, please don't be shy, ask away. Let's start from the first one. Can Sofia-SIP be used to help support VoLTE? While it's definitely true that voice over LTE is using SIP, that is actually done inside of the modem, so Calls doesn't explicitly use SIP in that case. Yes, in my understanding there is no way to bypass, so to speak, the modem and have direct access to the SIP server. Yes, that's correct. I mean, I'm not sure how this relates to, say, something like the open modem firmware by Biktorgj on the PinePhone, if that needs to be explicitly handled in that case, but from my understanding all of this needs to be done by the modem itself. Thank you. The next question is: would it be possible to use Sofia-SIP to bridge calls between ModemManager and Matrix? No, not as such, but there are some parts of the code, mainly the GStreamer pipeline for handling all the audio and potentially video, that could certainly be reused when we at some point support Matrix. Sofia-SIP itself is primarily about the signaling, and that is very SIP-specific, but of course you have all these things like the session description which are common to these sorts of voice-over-IP sessions. Okay, thank you. The next question is from me: can you provide some more details on what the auxiliary libraries do? Yes, sure. So libfolks: I think most of the things that I'm using it for I've covered in the talk itself. From a high-level perspective it is able to query Evolution Data Server and it aggregates all your contacts into so-called Folks individuals, and that is basically what I am given if I ask it for all the contacts, and I use that to fill the contacts box. And the same or a similar thing applies when we are querying Folks by number: we give it a number and we want to get out the contact that is associated with that number. Then one thing that I actually forgot to mention is that we are also using GOM. GOM basically handles all the details of dealing with our SQL, or rather SQLite, database where we store all the previous calls; that is where the history comes from. Thank you. Moving on: does the N8xx support the SIP code? So, N8xx, I am assuming that is basically all the different Nokia N8-something devices. I am afraid that I can't really answer that.
The best answer I can give is: I would assume so, because the Sofia-SIP library was originally written by Nokia people, and so I would be very surprised if they didn't use it on their devices. Thanks. Moving on: is there any issue when using PipeWire with Calls? I have not actually run this. I think the Manjaro people are actually using PipeWire, and seeing as you usually have all these compatibility layers like pipewire-pulse, those should basically be able to handle this sort of thing. Now if we are looking at something like callaudiod, I am not entirely sure whether all these features like switching to the speaker and so on really do work. But I would guess, if you are only looking at the actual call audio, that should be working. I hope someone corrects me if I am wrong. Well, PipeWire is very new, so it is still rough around the edges. I think we have 30 seconds for a last quick question. Is Matrix support on the roadmap for GNOME Calls? I definitely think that this is something that we want to have, but when exactly that might happen, I am not sure. Since Chatty is using Matrix, there will probably have to be some sort of... we need to flesh out the interactions there. But at some point it will come. Okay, the live broadcast is finishing here. Thank you very much for your time, and we can stay here in the room.
In this talk we will take a look at the anatomy of GNOME Calls. We will cover libraries used and how Calls interacts with them to provide call functionality and other things you'd expect from a dialer application.
10.5446/57159 (DOI)
Hello, as a developer of PADI-web, I'm going to give you an overview of this platform, which is a platform for the automated extraction of animal disease information from the web. So what is PADI-web? It initially comes from the needs of the VSI, the French epidemic intelligence organization, which aims at monitoring different sources in order to identify signals of sanitary dangers in animal health. In order to do so, they have to monitor different types of sources, such as official sources like the OIE and PESI, but also a number of experts. And they also have to monitor the web in order to find potentially relevant news containing such signals of danger. And this is where PADI-web comes in, because this task is huge and demanding, and PADI-web is here to help them automate this process. So as a result, the first definition of PADI-web is that it is a tool for monitoring online news for the detection of emerging animal infectious diseases and the extraction of outbreak information. Compared to other similar tools, PADI-web focuses mainly on animal health and has a fully automated pipeline based on multiple machine learning methods. As for the challenges encountered in the development and design of PADI-web, it's important to note that, first, it's a very generic and highly customizable tool. Even if I said before that it focuses on animal health, which is true, in the development of PADI-web we always had in mind that we wanted it to be generic, so that it could be applied to domains of application other than animal epidemiology. Also, it uses existing tools and libraries as much as possible in its processing pipeline, and it's a challenge to combine them and keep a processing pipeline that is complex but still robust. And finally, another challenge in the development of PADI-web is that we have different types of end users. We can have domain experts, for instance epidemiologists, but we can also have data scientists as users, or even developers. And of course, they expect different things from PADI-web, so we have to provide them with these different outputs. So here is the PADI-web processing pipeline. The first step of PADI-web is the data collection step, because we cannot monitor the whole web, of course. We made the decision to focus mainly on Google News, which means that PADI-web queries Google News every few hours with different combinations of keywords, like disease names, symptoms, and host names, in order to retrieve only potentially relevant articles for animal epidemiology. Google News provides us with results in the form of RSS feeds, which is basically a structured list of articles with different information. Then, for each article that is retrieved, we have the web page processing step. So here is an example of such a collection step. We have an example of an article from a website. First of all, we have to retrieve some metadata for this article directly from the RSS feed: the publication date, for instance, the title of the article, and of course a link to the actual web page of the article. Then we have to clean this web page. We have to visit it and extract the content that will be used inside PADI-web. So we have to clean everything, which means getting rid of everything that is not needed, like ads, navigation menus, images, videos, comments, and so on. Then we obtain a clean text, which in this case is in French.
Indeed, we have different types of sources in PADI-web, different articles coming from different countries in various languages. And we made the choice of processing texts only in English within PADI-web in order to simplify the processing pipeline. So we have to translate the text into English. For that we have first a language detection step, and then we apply machine translation, using the Microsoft translator here, in order to obtain a text that is clean, that is in English, and that is ready to be processed within PADI-web. Now that we are able to collect articles, clean them and store them in our database, we have a data classification step, which initially is mainly here to decide whether an article is relevant or not. Indeed, in PADI-web we collect a lot of articles daily. Every day we have a lot of articles collected, but not all of them are of epidemiological interest, despite the fact that we made quite precise queries to Google News. So we had to build an automatic classification system based on supervised machine learning methods. And so we did that, and actually it's quite good at removing non-relevant articles in PADI-web. But another thing that is important to notice here is that the classification module, here again, is very customizable, very generic, and it can be used for other classification tasks. And actually this is what we do. For instance, we use the classification module for deciding what the topic of an article is: is it about an outbreak declaration, is it about the consequences of an outbreak, and so on. We can also use it for sentence classification, which means we do not only classify the whole article, but can also classify each sentence in an article, for instance to tell the users what type of information is contained in a sentence. And in PADI-web we can also define other classification tasks; it's quite easy to do within PADI-web. Another thing to notice here is that the users within PADI-web can rectify wrong classifications, which means that if a user sees a classification that is not correct within the PADI-web user interface, he or she can rectify it, and then every day the classification models are retrained based on these new examples, so that they get better and better over time. Now here is an example of an article that has been classified, in the PADI-web user interface. Here you can see that this article, for instance, which is about an outbreak of avian influenza, has been classified first as relevant, but also as being about an outbreak declaration. And this type of thing allows the users, for instance, to study only articles that are about outbreak declarations. Now, another module that I would like to speak about today, because it is very important in PADI-web, is the information extraction module. The goal of the information extraction in PADI-web is to automatically detect pieces of information within the texts that we collected, pieces of information that are of interest for animal epidemiology, such as location names, hosts or species, disease names, case numbers, dates and so on. Here you have an example of a sentence where everything in blue is a type of information that we would like to be able to extract automatically. In order to do so, we use a named entity recognition tool which is well known in the machine learning and text mining community, called spaCy.
Actually, spaCy already includes a model to recognize some generic entities in English, entities like names of locations, dates, names of organizations, names of people, things like that. But of course it is not designed to extract epidemiological information. So what we had to do here was first to build a dataset of around 500 articles which has been manually labeled: each entity in these articles has been labeled. And then we trained a specific spaCy model for animal epidemiology in order to recognize entities that are specific to PADI-web topics, disease names and so on. And this is what allows us, in the PADI-web user interface, to have things like this, where in the text of an article, African swine fever has been automatically detected as being a disease name. And other types of information are automatically detected like this. This is very important for the outputs that the end users are going to use in the end. Okay, finally, for this presentation, I would like to speak about the last step of the PADI-web processing pipeline, which is about the outputs of PADI-web for the end users. As I said before, PADI-web has very different types of users, so of course we have to provide different types of outputs for them. We have, first of all, the PADI-web user interface, which is a website; and by the way, it is publicly available. Then we have notification emails for helping the continuous monitoring of news collected by PADI-web. This is typically very useful for the French epidemic intelligence surveillance members who have to see daily what is happening on PADI-web. Then we have various exporting capabilities in PADI-web in order to extract different types of data generated by PADI-web. This is useful for data modelers and computer scientists, for instance. And finally, we also have a simple JSON API for developers, which can be used, for instance, if developers want to use some of the capabilities of PADI-web without actually using the PADI-web user interface, the website, if they want to do something else with the capabilities of PADI-web. So these are the different types of outputs. Here is a very quick example of the notification emails. This is the kind of email that users can receive daily or weekly; they show the last articles that have been collected during the last 24 hours, for instance, by PADI-web, with some basic information. Actually, they are organized by disease and by location. And another thing here is that users can select the diseases they want to follow. So for instance, if I am only interested in avian influenza, I can specify it with my subscription on the PADI-web user interface. So this is one type of output. Another type is about the exporting capabilities of PADI-web. Here is an example of probably one of the most useful exporting formats for PADI-web, which is about exporting location-based extracted information from PADI-web. It is a simple CSV file, for instance, with every row being one location found by PADI-web in an article. And then we provide different types of information: a lot of information about the location, which actually comes from the GeoNames API. So you have, for instance, the spatial coordinates of the location, which is very useful if you want to put them on a map.
Beyond the coordinates, the export then contains all the other types of information extracted by PADI-web around the location mention, and also information about the article it was found in, and things like that. So this is one type of export in PADI-web, but I would like to say that PADI-web is also able to export other types of data: almost every type of data that is handled or generated by PADI-web can be exported, be it articles, extracted information, keywords, sentences, RSS feeds and so on.

Now, to conclude the talk, I would like to speak a bit about current and future work, because PADI-web is becoming a very large platform and there is room for improvement in almost every step of the pipeline, of course. But we have identified a few steps that are more important, such as improving the geolocation, which is the work of extracting spatial information from the text. We have to refine the geolocation used in notification emails, because sometimes the country we associate with an article is wrong and we have to fix that. We also need some disambiguation of location entities, which is basically the problem that when I see the name of a city, there may be different places in the world with the same name, and which one is mentioned in the text? We also have work in progress on visual analytics of detected events with some of our partners: there is work on space-time visualization of detected outbreaks in collaboration with partners in France, and work on risk mapping using extracted outbreak data in collaboration with partners in Belgium. And of course there is also work on improving specific modules in PADI-web or creating new ones. A lot of research work is done, for instance, by PhD students or postdoc researchers, and one part of my job as a developer is to know how to integrate their work into PADI-web while maintaining the main ideas of the platform.

So thank you very much for your attention. Here you have some of the members of the PADI-web team, and I'd like to say that if you want to use PADI-web, if you want to try it, or if you have some ideas to collaborate with the PADI-web team, you can contact us at the email address given here. Thank you.

Let me kick off the first question: how many articles do you have at the moment?

Okay, actually I don't have the exact current numbers, because it evolves quite quickly, but I think we have a bit less than 400,000 articles in the database, because it started collecting articles a few years ago, in 2016 I think. Not all of them are relevant, but it gives an idea of the size of the database. And for each article we have hundreds of extracted pieces of information and so on, so the database is quite big, actually.

And geographically, because I see that you geocode using GeoNames, have you tried making a map to see the density of this?

Well, actually, not directly us. In PADI-web there is no map visualization, but there is some discussion to integrate map visualization into the PADI-web user interface. Still, there is another partner, in Montpellier in France, and they have been doing some work to actually show a map, and not only a map, but also a time-related map.

A time slider to see the evolution?
Yes, and a map in order to visualize the outbreaks and so on. So it can be done with the type of exporting ability that I showed before, using data from PADI-web.

So at the moment you don't have that?

Not within PADI-web, no. It is not a part of PADI-web; it is something that has been done outside PADI-web using the data generated by PADI-web.

And have you tried testing how PADI-web reacts to noise? There is a lot of work today on detecting inflated information or spam. How do you handle that, or is it done on the Google side, because you use Google News?

Actually, what we noticed quite quickly when we made the first prototype of PADI-web was that even with very precise queries to Google News, with very specific keywords, disease names and so on, there is still a majority of collected articles that are not really relevant at all. The way we dealt with that problem was to introduce the classification module. Because we provided it with labeled data, examples of relevant and non-relevant articles, it is now able to do this work of removing everything that is not relevant, and it is actually quite good at doing that: if I remember right, we have something like a 95% accuracy score for deciding whether an article is relevant or not. So this is something that works pretty well. But then we also have the noise of everything that comes after, which is mainly the information extraction module. Of course, sometimes we extract information that is not correct; sometimes the location names are not the right ones, things like this. The sheer amount of things we extract solves part of the problem, because if a lot of articles say that something happened here, then probably there is something happening here, and if it is a complete outlier, maybe it is wrong. But we don't have, for now, a way to really eliminate wrongly extracted information.

Okay. We have a lot of questions in the chat, so maybe I can read them out to you. Let's start with the first one, from Elena: can PADI-web use other languages besides English, like for example Serbian, since we also have Serbian partners? And how could it be used by colleagues in other countries?

Well, yes, PADI-web has really been designed with genericity in mind, which also means being able to work with languages other than English. As you saw, we have a translation step in our pipeline, which aims at translating everything that is not in English into English so that we can work with it. But it is also completely possible in PADI-web to just work with another language, to decide that we are working in something different than English, or to just not translate everything. In that case the main difficulty is actually not on the PADI-web side in terms of code or development; it is more about how to feed the machine learning models that we use. If you now want to use Serbian, then you have to build classification models for Serbian, for deciding whether an article is relevant or not, and so on. So it is a lot of work to prepare the machine learning models in order to be able to use a language, and this is mainly why we decided to just use one language, because otherwise it is too much work.
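Since supervised classification comes up repeatedly here (the relevance filter with its roughly 95% accuracy, the topic labels, and the per-language models just mentioned), the following is a minimal, assumed sketch of what such a text classifier could look like. It is not PADI-web's actual implementation; the toy examples only show the general shape, including how user-corrected labels can simply be added before the daily refit.

```python
# Hypothetical relevance classifier: TF-IDF features plus a linear model.
# Not PADI-web's real pipeline; training data here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled examples: 1 = epidemiologically relevant, 0 = not relevant.
texts = [
    "Outbreak of African swine fever confirmed on a farm near the border.",
    "Local football club wins the regional championship again this year.",
]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# When a user rectifies a wrong label in the UI, the corrected example is
# appended to the training set and the model is simply refitted (daily).
print(clf.predict(["Avian influenza detected in a poultry flock in the region."]))
```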
Hi. Can you give a concrete example of how developers use PADI-web?

Yes. A very simple, quite trivial example from a few years ago: someone wanted to use the abilities of PADI-web, the ability to collect data, index them and run classification on them, but they did not want to use the PADI-web user interface. They wanted their own module doing that within their own user interface, for another type of project. In that case they were using the PADI-web JSON API, so they could use some of the features of PADI-web, and some of the PADI-web database as well, completely outside the user interface. That was one example of usage.

Okay, thank you. And do you know how many people subscribe to this notification system, and do you have any idea who they are?

Well, I don't know how many there are today. I know that each time there is a presentation, some people ask for a user account. But typically the notification emails were designed for people from the French epidemic intelligence surveillance team, because almost every day their work is to go to the computer and search for what has been happening in the last few days, what news is relevant to their problem, et cetera. For them it is very useful to have this type of notification email directly in their inbox every day, so that they don't have to search by themselves; they don't have to go to the PADI-web website and do a search, they receive the information directly.

And one last question from the chat: how do you deal with fake news?

Okay, this is not a problem that we have specifically addressed. I don't have in mind an example of fake news that appeared in PADI-web and added noise to the data. I know there are people in the audience from the PADI-web team who may have an idea, but for now we are just not addressing it; we haven't seen the need for it so far.

Okay, thanks. And then I see one hand raised from Frank. Frank, would you like to ask a question?

Yeah, thanks, Julien, and thanks, everyone. It is really a very nice platform. I just have one or two questions; my name is Frank, from ISIDM. Firstly, on the location part: the geocoding of the collected data, at what spatial resolution is that done, at what geographic scale? Is it at country level, at state level, or at sub-state level? And for places that don't have good geocodes, how do you manage that? I'm based in Nigeria for now, and a couple of places in Nigeria can't be geocoded, so how do you manage such cases? Then, on the API: is it tied to the subscriptions? Does it mean developers only have access to the API endpoints for the parts they have subscribed to on your platform, or does the API expose any needed endpoint to a developer? And finally, how often do you bring together the team of experts, perhaps your moderators, to review the classification results, to be sure of what has been classified correctly after the initial 500 classified articles? Those are the few questions, thanks.

Okay, I'll try to remember everything as much as possible. I think the first question was about geocoding, about using GeoNames and the granularity of the data from GeoNames.
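As background for the answer that follows, a GeoNames lookup for an extracted place name could look roughly like the hedged sketch below. It uses the public GeoNames search API, not PADI-web's actual code, and you would need your own (free) GeoNames username, as the shared demo account is heavily rate limited.

```python
# Illustrative GeoNames lookup: ask the public search API for a place name
# extracted from an article. Error handling and rate limiting are omitted.
import requests

def lookup_place(name, username="demo"):
    # Replace "demo" with your own free GeoNames account name.
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": name, "maxRows": 1, "username": username},
        timeout=10,
    )
    hits = resp.json().get("geonames", [])
    return hits[0] if hits else None

place = lookup_place("Montpellier")
if place:
    # GeoNames reports coordinates, the feature code (city, region, ...) and more.
    print(place["name"], place["lat"], place["lng"], place.get("fcode"))
```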
About that: we don't have any pre-specified granularity that we want to use. We just find names of locations in the text, and then we query GeoNames to know whether there is an entity with this name in GeoNames. So we don't have any constraint on the granularity; GeoNames gives us the information, such as what type of spatial entity it is. That is the kind of information you have in the exported data, and it is completely provided by GeoNames; we don't filter or process it. And if something is not present in GeoNames, then unfortunately we just know that it is potentially a location name, because we extracted it as one, but we don't have any information about it, because GeoNames doesn't recognize it. That can happen. Sometimes it shows that the information extraction made a mistake and said that something was a location name when it is not, and sometimes it is just that GeoNames doesn't know it.

The next question, I think, was about the JSON API. The JSON API for now is very simple; it was designed very specifically for a given project. The question of whether it is publicly available for other developers really has to be discussed with the PADI-web team for each case, because for now the API is protected so that it cannot be used by people who are not authorized to do so. So you would have to send an email to the PADI-web contact address.

And the last question, I think, was about the classification, how we make sure the classifications are correct. One thing is that, as a signed-in user on PADI-web, each time you see an article you have the list of classification labels that have been assigned to it. Here, for instance, in my example, it is a relevant article and it is an outbreak declaration according to the automatic classification module. Now, if you see that there is a mistake, a user can just click on a button and that takes precedence: this one is marked as wrongly classified, and from then on we consider it as the user said. And when we automatically retrain the classification models every day, we have a cross-validation process that is there to validate whether the classification model, on the examples we have, is doing a good job or not.

We have a couple more questions from the chat. From Isabelle: could PADI-web be used for videos or spoken news, so not only text?

This is a very interesting question, actually, because there is a discussion currently on working with YouTube videos, since they can indeed be a different type of source. Currently there is nothing in place that can recognize speech and convert it into something we can work with. But on YouTube there is something very interesting, which is that there are transcripts for the videos, either user-written or automatically generated, and we plan maybe to use these kinds of transcripts in order to work with videos.

Can I also ask one more question, Julien, about the data size?
What would be a dream size where you think you could do some very reliable modeling? You said you have 400,000 articles. I don't know how many articles Google News produces per year; would you like to have, say, 40 million articles that you can analyze, including video? What would be a dream size?

Yes, well, it really depends: each model has its own requirements in terms of data size in order to become reliable. For instance, if I take the example of the relevance classification models: is an article relevant or not? There are only two possibilities, and the texts are usually very different depending on whether they are relevant or not, so we have been able to build a model that is quite effective with not a lot of data. For other types of classification it is much more difficult, but it is also much more difficult to build user-labeled datasets that will be really useful. So I would like to have as much data as possible, of course, but it means that the actual effort is on the experts who have to manually label the data. For instance, for the model that we built for the information extraction module, it was about 500 articles, which is not that huge in terms of machine learning datasets, but Elena Arsevska actually did the work of labeling each entity within each text, and it was a huge, huge task. So we have to find the right balance between feasibility for the experts and usefulness for the models. There is not one right answer to that, I'm afraid.

And what about historic news articles? Most of the articles that Google News harvests are current digital news articles, but there are also historic articles. Google digitized books, and the European libraries digitized books too. What about historic news articles going 20 or 30 years back? Would this be something interesting for PADI-web?

Well, I guess for the epidemiologists it could be of interest, to see how things evolved over time. From the developer's point of view, from my point of view, it could be feasible if we can get access at some point to raw text that we can process. That is the most important thing for us: being able to convert everything into text that we can process.

Initially, PADI-web comes from a collaboration between epidemiologists from the French epidemic intelligence team and computer scientists. The epidemiologists have to find potential threats to animal health by monitoring unofficial news articles on the web, and the computer scientists thought that maybe it was possible to automate this part of the process, by collecting articles and processing them automatically. The idea was to build an automated pipeline, exploiting work in machine learning, in order to provide something that we felt would really help the epidemiologists in their daily tasks. That was the main idea behind the platform, and that is how the first prototype of PADI-web was developed a few years ago. Since then, a lot of features have been added or improved in PADI-web, such as the article classification module, for instance.
And by discussing with other research groups and other people, we realized that this type of platform, like PADI-web, would be really useful in many application domains: monitoring news or documents and automatically getting an idea of the information they contain is something that has to be done for many applications, not only for animal epidemiology.

There are very different channels of distribution for the information extracted by PADI-web, because we have different types of users and they expect different things from PADI-web. The first one concerns domain experts, epidemiologists, for instance from the French epidemic surveillance team. They are interested in following almost daily what is happening, so that they can have a feel for the current disease outbreaks, and they also need to be able to react quickly if something of particular interest happens. Another thing is that they don't want to go to PADI-web every day to search for the new articles; that is why we have the notification emails in PADI-web, which provide them directly with some of the information that has been collected lately, during the last 24 hours, for instance. Then, if something of interest appears in the notification email, they can still go to PADI-web and do some research about the thing that happened. And then we have a completely different set of users: data scientists, computer scientists, developers. They are more interested in the data collected and extracted by PADI-web, and they want to analyze and process the data differently from what we do in PADI-web. One example is building map visualizations from the outbreak data that PADI-web has extracted. In this case we provide these types of users with export files in different formats that they can process. So these are the main ways to distribute the information collected by PADI-web.

And yes, of course, there are lots of plans to improve PADI-web, because it is becoming a very large platform, and for almost every step present in PADI-web there is a way to improve it. There are maybe different levels of improvement we can work on. One is to improve each module individually: maybe improve the information extraction module, maybe improve the collection process, and so on. One example of this type of improvement is the geolocation, the geocoding part of PADI-web, which is about extracting spatial information from the articles. Here we identified some weaknesses in our process, so currently there are, for instance, some PhD students working on this problem, trying to find different ways to do it, and later on maybe we will integrate this type of work into PADI-web in order to improve it. And then I think there is another level of improvement which is very important: improving the datasets, the data that we use to feed the different machine learning models in PADI-web. The classification models, for instance, really rely on the kind of data we gave them as examples, and it takes a lot of effort from the experts to manually build datasets; it is difficult to ask them to do that often, but it is still very important.
And I think this is one of the main things that we really need to work on in the future to keep PADI-web performing well.
Julien Rabatel is a freelance developer with a PhD in Computer Science. Nowadays, he uses his development and research experience to participate as a freelancer in various projects, frequently involving public institutions such as CIRAD or universities. PADI-web is an online tool that collects and processes documents from the Web. Its goal is to allow daily surveillance in epidemiology, by exploiting unofficial sources such as local news. This talk will describe how PADI-web is working, and what it can provide to the end-user. Also, we will take a look at the future of PADI-web and how we can address its current limitations.
10.5446/56922 (DOI)
Hello everybody, welcome. My name is Andreas Kemnade and I will talk about the road towards using regular Linux on ebook readers, especially on Kobo and Tolino readers. First I will tell you a bit about me, the motivation behind all that, and about the devices. Then I will go through the system from the bottom up, from the bootloader to the kernel and then the userspace, and the changes needed there. I will also show my favorite use case, maps, and how the display feels. Then some videos.

So, if I am not hacking on something, I am often doing outdoor stuff. In the summer vacation I usually do long bicycle trips, often in sparsely populated areas in northern Europe, so I am a bit on my own there. Of course I have some Linux mobile device with me; I had a PDA in former times and also an Openmoko. So the motivation behind all my work is to solve my outdoor problems, and it can only be done in my spare time.

The ebook readers I am involved with are the Kobo Clara HD, my first one. I also talked to someone and persuaded him to buy the Tolino Shine 3, because we both thought they were equal, but that was not the case; they are not so equal. So I helped him with porting it. I also found a used Shine 2 HD and ported that as well. For comparison I got my hands on a defective H2O, and for use in very condensing atmospheres I bought a Vision 5, for example for showing star maps.

So what do I want? Basically, a bit of computer in the middle of nowhere. I prefer little power consumption over thick batteries, and the display should be daylight readable. Having it with me should not prevent me from relaxing too much; it is not powerful enough to look at, for example, the coronavirus news all the time, and that is a bit more relaxing. Also, a bigger display is helpful if you do some planning and not just simply go from A to B, and it lets me actually have some information stored with me. And of course being able to do some text work is always nice. Contrary to the other devices I had before, here I can find something similar around the next corner, so my work is not lost if I wreck a device. Basically: master one device, and you can master the next one easily.

About the devices: there is a Tolino alliance, consisting of mainly German bookstores, to compete with the Amazon Kindle more easily. The Tolino brand belongs to Rakuten Kobo, so you find similar devices, and they use standard file formats, so the original software is also of quite some use. You often have pairs of almost identical devices with just a different, pin-compatible SoC: the i.MX 6SL in the Tolino devices and the i.MX 6SLL in the Kobo devices. That "almost identical" was a pitfall for me, as I mentioned with the Shine 3. The vendor software on the Tolino is an Android-based system on a 3.0-based kernel, while on Kobo you have something a bit newer. This all asks to be replaced by something newer, so you can use more ordinary, more recent stuff.

So how can they easily be hacked, and what can you do with them? The non-waterproof devices have internal microSD slots containing the whole operating system and the bootloader, so the card can be replaced by something bigger and you can make backups. The waterproof devices usually have eMMC; that is a bit more tricky, you can brick them, and the space is a bit limited, so you probably won't put a Wikipedia dump on it.
All of them have an easily accessible serial console and also some fastboot; at least the Kobo bootloader allows using USB mass storage. And there is the i.MX-specific recovery over USB: if no bootloader is found, like when no microSD card is inserted, or on the eMMC devices when the area is zeroed so it does not look like a bootloader, the device enters this mode. You can also enter this mode by writing some magic values into some registers and then resetting. There was once an 8 gigabyte card in here; I have replaced it. The console is clearly marked, and interestingly, next to it there is another UART, so you can attach your favorite peripheral to it if you want.

So how do I debug these things? I remove the back cover, buy a sleep cover for the device and put a hole in there, so I have a debug cover, and I access the pads with pogo pins, here also with a wireless adapter so I can access it wirelessly. So, no wires attached. It was looking a bit ugly.

Now let's talk about U-Boot, the bootloader. The vendor U-Boot has some special behavior: the bootm and bootz commands are modified, they load blobs from hidden partitions and append things to the kernel arguments, some of which are required. And if no kernel has been loaded yet using a special command, they tend to override things, so if you are just loading the kernel from the disk, you have to work around that with some tricks. And for some reason, deep sleep on mainline kernels in combination with the Tolino vendor U-Boot does not work reliably. So I decided to replace it with something newer. The downside is that booting the vendor systems becomes a bit more tricky, because you need to load these hidden partitions and pass the addresses on the kernel command line; I did not manage to boot the Tolino vendor kernel with it yet. On the other hand, buttons and the LED are easily accessible for boot options, like dual-booting the original system or doing some recovery things, and it is just more easily configurable. A big plus: we tested using the USB recovery boot, so you can do a test run before writing it to flash, and you can avoid breaking your devices.

Now let's talk about the kernel, and what is already in there. The device trees for the devices I am involved with are upstream, as are the PMIC drivers for battery and real-time clock. The touchscreen driver for the Shine 2 HD has been upstream for a longer time already, and for Wi-Fi, some devices have a Broadcom-based chip with drivers upstream. What's missing: there are patches for another touchscreen already on the mailing list, under review, and they cover a lot of devices; besides the devices mentioned here, also the reMarkable 2 and the PineNote. Then there is a Realtek chip with only an out-of-tree driver, and there seems to be no active upstreaming work. The biggest construction site is the display. It is not mainlined at all, but I have a DRM variant of the driver in preparation; I think I will send out an RFC patch series soon. To use the display you also need a driver for some EPD PMICs, and the most recent code for all this is in my GitHub repository, so you can check it out and test. I have documented some quirks, like how to convince fastboot to take that kernel, and also the individual status for several devices.

So now about the display, the most interesting part. The tricky thing is that we need a list of non-overlapping damage rectangles and pass them through. First the contents are converted to grayscale.
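To make the idea of non-overlapping damage rectangles a bit more concrete, here is a toy Python sketch that merges overlapping dirty regions into their bounding boxes, so the resulting list no longer overlaps. The real logic of course lives in the kernel driver, in C, and is more involved than this.

```python
# Toy illustration of "non-overlapping damage rectangles": fold overlapping
# dirty regions into bounding boxes before sending them to the EPD controller.
def overlaps(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_damage(rects):
    # Repeat until no rectangle overlaps another: merging two rectangles into
    # their bounding box may create a new overlap, so iterate to a fixpoint.
    rects = list(rects)
    changed = True
    while changed:
        changed = False
        merged = []
        for rect in rects:
            for i, other in enumerate(merged):
                if overlaps(rect, other):
                    merged[i] = union(rect, other)
                    changed = True
                    break
            else:
                merged.append(rect)
        rects = merged
    return rects

# Two overlapping updates collapse into one box; the third one stays separate.
print(merge_damage([(0, 0, 100, 50), (80, 10, 200, 60), (300, 300, 320, 320)]))
```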
Several ebook readers have rotated displays, so we need to rotate here as well. On the i.MX 6SL the PXP does this kind of processing; it is not well documented. The data then goes to the EPD display controller and from there to the display. My DRM driver does not use the PXP yet, because the mainline implementation is too limited. The EPDC also needs a waveform blob to operate.

So what are the challenges here? You need to get these rectangles, so I use an API that is already used for other devices, like command-mode DSI displays, so the userspace is prepared for it. The vendor kernel instead uses framebuffer devices and special ioctls, which need special programs to handle. If you use ordinary programs, you will need to track the damage, for example the damage events of Xorg, either by patching the framebuffer driver or by running a special program on top of Xorg and catching the damage that way, or by other mechanisms to recognize changed memory areas. But that usually results in full horizontal stripes being refreshed, which is especially ugly if two small rectangles are damaged with some distance between them. I have quite some strange code in my driver to make things work; that is all a bit ugly. The waveforms are stored in hidden partitions; I copy them to the firmware directory and they are loaded from there.

Now let's talk about distributions. There is postmarketOS, which supports the Kobo Clara HD and the Shine 2. You take your SD card and install postmarketOS on it; on the first boot the waveform is copied from the hidden partition. Then there is also a layer on top of OpenEmbedded for the Kobo Clara HD. And there are some special e-reader software distributions; they are using the vendor kernels and thus those special ioctls.

Now to my favorite use case, maps and navigation. I am using a special USB cable to attach a power source and a GPS receiver; the power source is my bicycle hub dynamo. One could also use the secondary UART, but I had the former setup already in place, so I used that. About maps: the standard OpenStreetMap style is hardly readable; some topographic maps from Swedish and German state agencies seem to work surprisingly well. There is also Navit. This here is Navit with OSM data and routing, in sunlight: quite readable. Here is the OSM standard style, which I improved a little bit by exchanging the grey street borders for black ones, but there is still a lot of work to do here. And then there is a map from the Swedish state agency; it is easily readable and under a nice license. I had a similar experience with not-so-free maps from Germany.

So now let's look at some videos of how the display looks. Here I am booting the system; this is the Shine 2 with my DRM driver. Now it is starting; you see the cursor, you can move it. Okay, now this is postmarketOS with XFCE4; I am unlocking my screen. You see some shadows from the display. Now I have my terminal. Now it is time for your questions. That's it.

We are live. Andreas, thank you very much for the talk. So we can get started with the questions; we have quite a few, starting from the first one: which of the ebook readers did you find most pleasant to hack on, and why?

I mostly prefer the ebook readers with a microSD card, because I can easily back up the whole system and cannot damage too many things, so there are fewer chances to break them. My favorite is still the Kobo Clara HD, and also the one with the second serial port is nice for hacking.
You can attach something to it; it just gives more possibilities. And getting a big microSD card also helps; as you have seen in my pictures, I have a 128 GB card in there. That is also helpful to keep the original system around while booting something else.

Thank you. Are you aware of any work on mainlining Linux on the Amazon Kindle devices?

I am not aware of anything there. I personally have not looked into that too much, because that would be too many devices to even try to support.

Okay. The next one was about maps, maps for navigation.

Well, it works; how well depends on the style, so you can have quite different experiences. The standard OSM style is hardly usable, but Navit itself works, and I have successfully used it.

Next one. I lost your video stream for a while; I don't know if other people are seeing you, so I'll move on. Another question related to the previous one: which of the ebook readers do you find most pleasant to use, and why?

For all this outdoor stuff I would prefer the waterproof ones, but that clearly contradicts the hackability. So for just reading ebooks I would prefer the waterproof ones, so that you don't need to be that careful with water and such things.

Thank you. Then moving on: do you think that OEMs will become more friendly towards open source projects in the future?

Well, for the ebook readers, at least you can ask for sources and they respond, but they do not care too much about contributions. When I reported kernel errors or something, it somehow got fixed after a year or so, and there was no response. If there is more stuff in mainline, especially the EPDC driver, maybe. On the other hand, they would have to adapt their software, which has hard-coded paths to the battery capacity that include the chip name, and they patch the kernel so that, regardless of what chip it really is, it is just renamed in the kernel. So somehow I have my doubts there.

Thank you. And the next question: since Kobo and Tolino share almost identical hardware, but the former runs Linux and the latter Android, is there any chance of further interoperability and cross-development?

If it is about the different SoCs, then even the U-Boot of the other device does not find the eMMC, so it already stops there. If the devices are similar and have the same SoC, there are some reports that the Kobo firmware boots on Tolino hardware and vice versa, with some limited functionality; I have not had it fully working, but people have tried that. If it is about the different SoCs, then there is the problem that they run different kernel versions, so the userspace would have to be adapted, and that is probably not so easy. And getting the vendor kernel to work on the other processor would be a nightmare, since the old 3.0 kernels used on the SL devices are different, they don't use device tree, and these old kernels do not support the SLL processor. So that would be a nightmare.

Thank you. We have 30 seconds left, maybe for one last question: how big are the eMMCs on these devices?

When I first got in touch with it, it was eight gigabytes. On the newest ones I have seen something about 32, but eight gigabytes is common, and on some older ones a bit less.
Most Kobo/Tolino readers offer a well marked console port and often a second UART. If they are not water resistant, they offer an internal µSD card slot containing the whole operating system and bootloader, so that sounds like an invitation to do something interesting with them besides just reading books. Especially in prolonged outdoor activities, the display and their low power consumption have their merits. Hardware is quite similar, so you also have chances to get a replacement around the next corner. Several devicetrees and also some drivers made their way into mainline Linux now; on others, upstreaming work is in progress. Support is starting to find its way into mobile Linux distributions like postmarketOS and graphics start to work with standard APIs. In this talk I talk about my experiences, especially the current state of support in mainline Linux, what is missing and what the challenges are. I will also talk about requirements for the graphics userspace and shortly present my favorite use case: displaying maps.
10.5446/56923 (DOI)
Hello everybody. Thank you very much for having me here for the first time. I am Martin and I am a kernel developer working with Purism on the Librem 5 phone, and in this little talk I want to give you a feeling for how we do kernel development, what we did over the last months, and what we are planning to do.

I think it was Daniel Stenberg from the curl project who used to joke that there are many successful projects, but you really know your project has won when it is being used as a verb. When we talk about mainlining, this always reminds me of that, and I guess in that sense the mainline kernel tree really won.

Here on the left side is of course the Librem 5 phone. I have personally used it for a few months now as my only phone, and it is a really fun device. It is actually a funny kind of phone, because technically it is really similar to my workstation.

On source.puri.sm we have a GitLab instance, and we host the Librem 5 kernel tree there, which is a development tree on top of mainline. We use the GitLab workflow to create merge requests for changes against our tree, but these days it even happens that we develop and test changes in the usual way, but against mainline directly. Since we base our tree on stable kernels, we can do such changes without them ever being inside our out-of-tree patch set.

With our patch set we mimic the upstream development cycles: roughly when the merge window for a new kernel closes, we rebase our patch set on top of an rc kernel and use the rc stabilization phase to debug and stabilize our patch set as well. By the time Linus tags a kernel as stable, we make sure that our patch set, rebased on top of that, is ready to be used by our users, and we release it.
In the end, of course, the goal is for the part of our patch set that is not in mainline to become smaller, and eventually it will be something like the Debian arm64 kernel, or at least as close as possible to that.

What we did in 2020: there is more, and I will forget to mention things, these are just a few pieces of work that came to my mind. Mostly, Guido of our kernel team wrote a driver for the MIPI DSI host controller of the i.MX 8M Quad. We did a lot of device tree descriptions; it is important to us to maintain them inside the mainline kernel to support the device directly. I wrote the IIO driver for the accelerometer and gyroscope. We focused on supporting the panels, and runtime power management, which is very important in our case; to this day that is basically all we do, if you will. For example, if you turn off the screen on the phone right now, nothing special happens: it is just that the display stack does not need to be powered anymore, which allows the DRAM frequency to run lower, and that saves a lot of power. Linux runs as it always runs; it just powers down devices.

For 2021, to give you some numbers: we used a 5.7-based tree with roughly 40,000 lines of code additions in our downstream patch set. We did quite some work on the USB Type-C side, the battery charger and the battery controller, and also Type-C power delivery and things like that. Then, when we were on a 5.9-based tree, we added another roughly 10,000 lines of code in order to support the cameras. We did that in order to have it quickly, and we used drivers from the NXP BSP kernel tree; over the course of 2021 we mainlined almost all of it. Today we use a 5.16-based kernel that has everything we need except for one driver: the back camera sensor driver. That is the only thing missing, at the very edge, and that is a very good position to be in from a maintenance point of view. We also improved power management for SD card readers in general, so that the event-polling mechanism distributions currently use is technically not needed anymore and runtime power management can just be enabled instead. And the last line shows that the bulk of our out-of-tree additions is really the Wi-Fi driver. That is kind of our blind spot currently, because the worst thing is that there is a mainline driver that is supposed to work with the card we ship; we just have not taken the time to use it and improve it to be as good as the driver we include. We really need to do that.

I put up this slide because I really want to thank the wider development community that helped me personally to do my work, not only in mainlining; without it, really: Laurent, for example, from the libcamera development, or Shawn for the arm64 device tree parts, and many, many other people. It does not really make sense to start listing people, but I myself hope to be as helpful to others as well. And of course the kernel team that I work with, with Angus, Guido, Sebastian and Dorota, is extremely helpful.

In 2022, I guess we will do quite some work in the DRM subsystem, where we have drivers that are not yet in mainline that we need for the HDMI or DisplayPort output. I think there is still work needed for the touchscreen driver, and of course the mentioned Wi-Fi driver.
A smaller item is the backside LED that can be used as the flash for the camera; that still needs Linux support. And I have been working on system suspend, which is actually going to be ready for users to test very soon. That is going to start an interesting discussion about how to really use it, what policies to create, and how to best make use of system suspend for a phone.

I put up this slide because I really spent a lot of my time on the camera stack last year, and one driver I wrote is the sensor driver for the selfie cam in this case. And I have a question, basically for you, or for the session later on: I have the datasheet for this sensor, and I could never have written this driver based on the datasheet. I only could do so because there is an Android device out there with the same sensor, and therefore a kernel out there with the sources. The reason is that there are literally hundreds of registers that are not documented at all being written to, in order to support the resolution modes and things like that. That becomes a problem, because as soon as I want to change this, or support different resolution modes or the like, I have no real basis to work against. I know that many sensor drivers are in this situation, and Android kernel trees, often hosted somewhere on GitHub, are not really good at preserving git history. So I would really appreciate any insight into how they are actually written. That is the question.

When we talk about mainlining, one thing that comes to my mind that is going to be especially hard is that the i.MX 8M Quad SoC has a hardware bug: simplified speaking, CPU cores cannot be woken up from the deepest sleep state. It is a problem we have a software workaround for, but that workaround is basically very ugly and not really suitable for the mainline kernel, and that is going to be one of the last pieces I think we will try to support in mainline. That is going to be hard.

Lastly, let me think about mainlining a bit in general, and what I want to say is: let's not see mainlining as a goal in itself. It is important to have the mainline tree; it really ties our community together and makes us write good code and communicate. A related word is, of course, communication, and we should really try to hold that up high, try to communicate in a good and nice way, respond in time, and explain openly why we do things and why we want things. Mainlining is one natural piece of our work as part of our community, but it is never where development stops; it is often the opposite, actually. I know we are sometimes really busy in our jobs and want to tick off our to-do lists, but I just want to encourage you to sometimes take a step back and look at what might be good for the project itself, not just for the task you need to get done, whether your driver is in staging or not. Question everything, but of course listen to other people, and really try to improve our project and community.

With that, I think I am at the end, and I still have a few minutes for questions later on. In case you own the phone: there are no secrets at all, our GitLab instance is there, and I usually write a blog post about our progress. For the mainline stuff we even accept patches at kernel@puri.sm, if that is what you want to do.
That is my email address down there. It is really a fun project, and that is really something you should remember: to have fun. And with that, thank you very much, and I am looking forward to talking to you. Thank you.

Okay, thank you very much Martin for your interesting talk. There are a couple of questions in the channel; I am going to start with the topmost one: what are the most enjoyable or annoying subsystems or patch sets you have worked with over time?

Oh, that is hard to say. In general I have really good experiences working with maintainers in subsystems; I don't really want to mention specific ones. The IIO subsystem was very helpful. Right now I am starting to dig around in the sound subsystem, and people are very helpful there too, so I really have no bad experiences, to be honest. The only thing that happens, of course, is that patch sets are lying around and nobody looks at them, but that is a problem that will not go away so soon.

Okay, yeah, that is true, but from what I have seen there was pretty good response to most of the patches I have seen flying around, which is nice. There is another question: you mentioned suspend; how much work is still needed on the kernel and userspace side until it can be used daily and out of the box?

On the kernel side I have made sure that the suspend and resume paths technically work. At least on my development device you can suspend the phone and resume it by pressing the power button or by using basically any wake-up line that is configured, and that should of course also include the modem receiving a call or an SMS. While that works on my device, I have heard at least one or two reports of issues; one issue is when you are not using systemd, so that is not a Purism-specific issue. But I would say the kernel is ready for people to test this, and we will not immediately configure it as the default, so we will not immediately say let's put the phone to suspend when it is not being used, but I want to get there, and I think we are at the point where we can basically ask everybody to try it out.

Okay, and any comments on the userspace side? That was part of the question.

The userspace side is something I only started to think about; it will be interesting. There is definitely work that needs to be done, as far as I know, but as for how a policy could look: as a first step, I think it should be very simple. Basically, instead of doing a plain suspend, do something like rtcwake: wake up after a few minutes, see whether something happened, whether any notification came in or not, and if not, suspend again. That is a very simple thing, and I think that is something we should work towards as a first step.

Okay, yeah. And recently in the chat I saw work done by other people basically on that pattern, so that sounds great.
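As a hedged illustration of that simple first-step policy (not an existing Purism tool): suspend with an RTC alarm via the standard util-linux rtcwake command, check on wake whether anything needs attention, and go back to sleep otherwise. The notification check here is a placeholder, since deciding how to ask the modem or the notification daemon is exactly the open policy question, and the whole loop needs root privileges to suspend the system.

```python
# Sketch of a naive suspend policy loop, assuming util-linux rtcwake is available.
import subprocess
import time

CHECK_WINDOW_SECONDS = 300  # wake up roughly every five minutes

def has_pending_notifications():
    """Placeholder: ask the modem / notification daemon whether anything arrived."""
    return False

def suspend_loop():
    while True:
        # Suspend to RAM and program the RTC to wake us up again.
        # rtcwake fails (and check=True raises) if we lack the needed privileges.
        subprocess.run(["rtcwake", "-m", "mem", "-s", str(CHECK_WINDOW_SECONDS)],
                       check=True)
        if has_pending_notifications():
            break          # stay awake so the user can react
        time.sleep(2)      # give wake-up sources a moment before sleeping again

if __name__ == "__main__":
    suspend_loop()
```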
There is a question from the chat: it seems much time has been spent on improving power management; how long does the battery last now? Maybe you can say something about the current situation without suspend, which is the default at the moment, and what you would expect as a very first step with suspend.

So for me it highly depends on how you use the phone. Right now, if you actively work on the phone, do something on it, the battery drains relatively quickly. Personally, without charging it, I would say I use it for half a day; that is really my use case: it lies around for an hour or two, I do stuff with it for an hour or two, and half a day is something it lasts easily. If it is lying around all the time, it would probably last more than a day, or maybe about one day. For the system suspend case, I think what we can expect is that when the phone is lying around and not being used, we can roughly double the time the device keeps its charge in that state. It won't change anything while you use the phone, of course, but it will definitely improve the time it stays available and powered.

Yeah, sure, it can only improve the standby time, and I think we also have some pending optimizations in userspace for the running time. There is another question: you mentioned you have been using the Librem 5 as a daily driver; what did you use before, and how is the experience? Is there a place where we can perhaps read up about your experience with the things that are important for a phone? Basically, what are you missing on your device, and how is the experience compared to your old phone?

Well, I had been using an Android device, with a LineageOS image without any Google stuff. The experience on the Librem 5 is, I don't want to say surprisingly good, but it is nice. It is in part a psychological thing right now; it is a really good feeling not to use tons of non-free software all the time. My personal use cases work. I personally don't yet use, for example, Signal messaging, but technically that should work as well. What I personally miss is probably good GPS/GNSS and navigation; that is almost the only thing I miss. I do miss one more thing, but that is a known userspace issue: proper support for holding and switching calls.

Okay, but I think you should be able to do that eventually; it is there, but you cannot configure it yet. I think it is time to wrap it up, we have 20 seconds left. Thanks again, Martin, for your talk.

Thank you.

The room is going to open up for others in a couple of minutes, and you have five seconds for any last words.

No, thanks for listening; it was my pleasure to be here for the first time.
In this little talk I want to summarize what we've done, describe how we do it and put it into perspective a bit. I'll outline rough future plans and of course encourage to participate in case you own that phone.
10.5446/56924 (DOI)
Hi, I'm Nikita, and I am taking part in maintaining various community projects related to the Snapdragon 410 system-on-chip. This includes a close-to-mainline Linux kernel as well as various other projects and tools that I want to tell you about today.

The Snapdragon 410, or MSM8916, is a system-on-chip released in 2014 by Qualcomm. It is their first 64-bit chip and can be found in a variety of mid-range phones and tablets released around 2015. The reason this SoC is interesting, however, is that there is a version of the chip called APQ8016E, a variant that targets IoT applications. One of the devices based on it is the DragonBoard 410c, a single-board computer that, like the other DragonBoards, was brought into upstream Linux by Linaro and Qualcomm, and it is maintained there. For the DragonBoard 410c, Qualcomm has released lots and lots of documentation, including things like the technical reference manual, a 3000-page PDF file that describes many aspects of the system. To my knowledge, there is no other system-on-chip from Qualcomm with such detailed documentation available. Using all of that previous work as a base, over a couple of years we were able to bring a close-to-mainline Linux kernel to more than 25 devices, with over 10 of those having most of the features, such as mobile data and phone calls, working to the level of a daily usable device. I know, from the questions we occasionally get, that at least a couple of people rely on these devices as their primary phones, and I personally have used a Snapdragon 410-based tablet instead of a laptop for almost a year now.

I think some of you are already familiar with the fact that smartphones usually come with a heavily modified vendor kernel, and that the process of bringing upstream Linux, the one from kernel.org, to them is often called mainlining. There was a deeper talk by Luca about this process yesterday, but very shortly: mainlining is the process of porting or rewriting the missing drivers and writing a device tree for the device in question. The device tree is a machine-readable description of all the hardware in the device that is used by the kernel to load and configure the required drivers. My plan today, however, is not to focus on the kernel side of things, as there are many other parts of the mainlining-related infrastructure that I think also deserve some attention. If you want to hear more about the kernel side, you might be interested in Caleb's talk later today; they are going to describe their story of bringing up mainline Linux on some Snapdragon 845-based devices.

The first problem we have in mind is the fact that vendors like to customize the bootloader. You might be familiar with Samsung's Odin mode, which replaces fastboot. This means that very convenient features like fastboot boot are missing. And even if a device uses a more generic bootloader with fastboot, it might have other quirks, like implementing fastboot in a way that does not fully align with the current fastboot implementation; in particular, on some devices flashing images must be done with a special raw flag, which is not the usual behavior. Fastboot on MSM8916 is implemented in aboot, which is the last bootloader in the chain. We cannot replace it, because its digital signature is verified on boot, and even if we did, it provides some things we still want. But do we really need to replace it? In fact, we don't.
We can actually put a shim bootloader into the boot image instead of the Linux kernel and do what we want there; the stock bootloader will then load it as if it were the Linux kernel. But what can we use for that bootloader? There are multiple bootloaders that come to mind when thinking about something like this. U-Boot would be an obvious first thought; something like EDK2 could also be used. However, one would have to bring up the platform on those almost from scratch and adapt to a completely new bootloader usage workflow. In the end, we did not want to put lots of work into the boot process, so we decided to use something that already has all the required platform code, an already existing bootloader implementation, and that implementation is the same aboot. We took the reference aboot code, which is based on Little Kernel from the CodeAurora Forums, and adapted it to boot on all MSM8916 devices. This project is now known as lk2nd.

Initially, lk2nd was only meant to provide a normal fastboot interface on Samsung devices, as a workaround, but it quickly became almost mandatory for booting Linux on our devices. It makes lots of different preparations, like giving the MAC address to the operating system, booting the secondary CPU cores to help with the fact that standard PSCI interfaces are not available, as well as other things, like conditionally patching the device tree and reading the boot file system. Right now, lk2nd supports a variety of devices on multiple platforms, including MSM8226 (Snapdragon 400) and MSM8974 (Snapdragon 800), in addition to MSM8916, where using lk2nd is almost mandatory to boot Linux-based operating systems.

Around the same time lk2nd was started, we were also working on bringing up the display on the first MSM8916 devices. On a desktop, where many different displays may be used, there are predefined standards for video transport, like HDMI and DisplayPort, and, more importantly, a standard for communicating the display capabilities, known as EDID. In the embedded world of mobile devices there is none of that: who needs display detection code if you can hard-code all the display parameters in the driver itself? As a result, even though a common transport, MIPI DSI, is used, every single display panel must be properly configured by the kernel, and there are special panel drivers for that. Thankfully, making a basic panel driver is easy: copy a template from another panel and just change the init commands to whatever the downstream kernel uses. Even more conveniently, Qualcomm encodes these sequences in the device trees of the devices, so we have really easy access to this data even when we lack the kernel sources. Well, copying and rewriting a dozen commands is easy, but sometimes there are hundreds of them. Converting that by hand is not only annoying but also error-prone. And since the data is already in a machine-readable format, and making a driver is just copying it into a template, maybe we can automate that. And of course we can: the Linux MDSS DSI panel driver generator, which we often abbreviate to lmdpg or any similar-looking combination of letters, is a tool that generates clean, upstream-compatible Linux panel drivers just by parsing the downstream device tree. lmdpg generates clean C code with the aim of being upstreamable, and some people already got a couple of displays upstream with its help. However, the drivers it generates are not perfect, as they assume one specific display configuration for a single device.
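To give a rough feel for the kind of transformation such a generator performs, here is a heavily hedged Python sketch that decodes a downstream-style DSI init command blob and prints something resembling the calls a generated panel driver would make. The 7-byte command header layout (type, last, vc, ack, wait in ms, 16-bit length) and the example bytes are assumptions based on commonly seen downstream device trees, not lmdpg's actual implementation.

```python
# Assumed downstream command layout: 7-byte header followed by the payload.
# This is only a conceptual sketch; the real generator handles far more cases.
import struct

# Made-up blob: a DCS "exit sleep" (0x11) with a 120 ms wait,
# then a DCS "display on" (0x29) with a 20 ms wait.
blob = bytes([0x05, 0x01, 0x00, 0x00, 0x78, 0x00, 0x01, 0x11,
              0x05, 0x01, 0x00, 0x00, 0x14, 0x00, 0x01, 0x29])

offset = 0
while offset < len(blob):
    dtype, last, vc, ack, wait_ms, dlen = struct.unpack_from(">BBBBBH", blob, offset)
    payload = blob[offset + 7: offset + 7 + dlen]
    offset += 7 + dlen

    # Emit something resembling the generated driver code.
    args = ", ".join(f"0x{b:02x}" for b in payload)
    print(f"mipi_dsi_dcs_write_buffer(dsi, (u8[]){{ {args} }}, {dlen});")
    if wait_ms:
        print(f"msleep({wait_ms});")
```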
With 25 devices, we have over 30 display panel drivers, while we could probably have just a couple of generic display controller drivers. The unfortunate reality, however, is that the documentation for those controllers is almost unavailable. So we end up with many more simple drivers, each tailored for a single device, wired up in whatever way the display vendor decided. You may even ask: why are there more display drivers than devices? There is this problem: vendors like to multi-source the parts. This means that a given phone may have one of multiple display panels. Each needs a driver, there is no mechanism in Linux to select one of those, and it is usually not even known how to detect which one is there. Do you remember me mentioning that lk2nd does some device tree patching? Downstream handles multiple panels by passing a command line parameter from aboot and deciding which init sequence to send to the display. We don't want to implement that in Linux. What we do instead is read this parameter in lk2nd and patch the compatible value of the panel in the device tree. This way, the kernel doesn't need to care that, for example, the Redmi 2 is known to be shipped with five different panels. To manage all of the panel drivers that we need for our devices, we have even more automation: a set of scripts called linux-panel-drivers, which accesses a repository of device tree blobs and automatically builds the required drivers and places them into the proper place in the Linux kernel tree. This enables us to make changes to the generator and apply them to all panels instantly. Maybe one day I will hook up git send-email to it. While talking about lk2nd, I have mentioned that the device verifies some digital signatures on boot. This is called secure boot. And unlike UEFI secure boot, which should be familiar to many, this one is a bit different. While on UEFI one can disable secure boot or replace the signing key with a special menu in the firmware, on ARM devices it's often implemented a bit differently. The public key, or its hash, is burned into the device's eFuses, one-time-programmable memory in the chip itself. Then the boot ROM, the initial bootloader, which is also baked into the chip, verifies the next stages against that key to make sure that only official firmware can boot. All of this basically means that you have no choice other than to use whatever was signed and provided by your device vendor. On MSM8916, this includes all bootloaders up to aboot as well as the firmware for all of the specialized cores like the modem and Wi-Fi. Until recently, the solution to this in postmarketOS was to just package the blobs for each model, which works reasonably well if you assume that only one signing key is used per device model. In reality, however, that's not the case. Samsung, in their greatest wisdom, decided that they should sign the firmware for each regional variant, even when the hardware on those devices is the same or very similar. For example, if you look at the Samsung Galaxy A5, you might find that it can be one of the following models: SM-A500F, SM-A500FU, SM-A500H, SM-A500YZ, and so on. All of those devices can work with the exact same system image, but Wi-Fi and the modem will only work on the one for which the firmware is installed. On the others, verification of the keys will fail. Making a dedicated device port and system image for each variant is a maintenance nightmare, so we had to find a better solution. This solution is msm-firmware-loader. 
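As a rough illustration of the compatible-patching idea, here is a hedged sketch using libfdt, which LK already bundles. The node path and the way the detected panel name is passed in are assumptions made for the example, not lk2nd's actual code.

```c
/*
 * Hedged sketch of patching a panel's "compatible" property with libfdt.
 * The node path "/soc/mdss/dsi/panel" and the detected_compatible value
 * are invented placeholders for illustration only.
 */
#include <libfdt.h>

static int patch_panel_compatible(void *fdt, const char *detected_compatible)
{
	int node;

	/* Locate the panel node; the path here is just an example. */
	node = fdt_path_offset(fdt, "/soc/mdss/dsi/panel");
	if (node < 0)
		return node;

	/*
	 * Overwrite the generic compatible with the one matching the panel
	 * reported by the stock bootloader, so the kernel probes the right
	 * panel driver without any runtime detection of its own.
	 */
	return fdt_setprop_string(fdt, node, "compatible", detected_compatible);
}
```

The kernel then simply binds whichever panel driver declares that compatible string, as in the driver-matching sketch earlier.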
This is essentially a small shell script that reinvents the wheel by running on boot and mounting the Android firmware partitions. It then symlinks all of the blobs from them into the correct places, so the mainline Linux kernel can find those blobs and load them from the expected location. Since the blobs are now loaded from the device itself, this firmware situation becomes to some extent transparent to the OS. Right now, the firmware is probably the biggest device-specific part of the system image, and with it being removed, the OS images can be shared across the devices. msm-firmware-loader makes the system more generic, but there is actually one very big problem that stops someone from taking an SD card from one phone, plugging it into another one and expecting it to boot. You remember that Linux needs a device tree for the device it boots on. Usually, the bootloader can pick one device tree from a list of given ones and pass it to the kernel. However, the reality is that the vendors often didn't set this up properly, and different device models often share the identification numbers their original bootloader relies on. Since we can't fix that, a temporary solution is to add a single device tree to the system image and assume it's the correct one. Obviously, this leads to various small problems, like dealing with variants of almost the same device, or the possibility of booting an incorrect device tree on a device. Those problems are what I'm working on right now. lk2nd can already know precisely which device it has started on, as it needs this information for other things. So we could reuse that to pick the device tree from some container. This would immediately have a lot of benefits. For example, different device variants can be handled seamlessly, the risk of booting an incorrect device tree is mitigated, and the system image can boot on any supported device. The latter is especially what I'm interested in achieving. Not only do I have a lot of devices that I have to deal with, but this also implies another very cool thing. So far, we have been focusing on postmarketOS as the distribution of choice for MSM8916 devices, since it already provides us with great tooling and handles this kind of device well. However, if we are to support our devices in more than one distro, I think the burden of maintaining over 25 dedicated device ports in many distros is a bit too much, especially with it being unlikely that all of them can be tested in a timely manner. On the other hand, I think it's unfair for those distros to have to start from scratch, waiting for someone who would make a port and test everything before they can start advertising the device. Because of that, I want our boot infrastructure to be able to support generic system images before we get the devices into different distros. This would allow us to provide a single generic port that automatically covers all of the devices we support. And if someone is interested in polishing a specific device, they can derive from that and expand it into a device-specific port. Of course, there are other things to work on, for example cameras and power saving, but those are rather complicated. For cameras, one would need to figure out how to initialize the cameras, because the initialization sequences are encoded in the user space blobs. And for power saving, someone would have to check every single driver used in a specific device to make sure it supports proper power saving features, as well as make sure that the system on chip can power save properly. 
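The real msm-firmware-loader is a shell script; the following is only a hedged C sketch of the same mount-and-symlink idea, to show how little is actually involved. The partition label, mount point and firmware file name are invented placeholders.

```c
/*
 * Hedged C sketch of the msm-firmware-loader idea: mount the vendor
 * firmware partition and symlink a blob into the place the kernel's
 * firmware loader searches. All paths below are placeholders.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	/* Mount the vendor firmware partition read-only. */
	if (mount("/dev/disk/by-partlabel/modem", "/mnt/modem-fw",
	          "vfat", MS_RDONLY, NULL) != 0) {
		perror("mount");
		return 1;
	}

	/*
	 * Symlink the blob into the path where the kernel's firmware
	 * loader expects to find it, so drivers can request it by name.
	 */
	if (symlink("/mnt/modem-fw/image/mba.mbn",
	            "/lib/firmware/mba.mbn") != 0 && errno != EEXIST) {
		perror("symlink");
		return 1;
	}
	return 0;
}
```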
In the end, I want to mention that we wouldn't have had a chance to face those problems if not for the amazing community behind the Linux on phones efforts. There is a person behind each of those devices who did the porting and testing, as well as all of the people who make the UIs and software for this form factor. I am grateful for all the effort that was put in by those people and companies. For those curious, in this presentation I had the following devices. There are two tablets, Samsung Galaxy Tab A 2015, both the 8-inch and the 10-inch. There is a Wileyfox Swift, which is this phone. The single-board computer is a Geniatech DB4, which is compatible with the DragonBoard 410c. Two other devices are based on MSM8226 and are there to showcase lk2nd: the smartwatch is the LG G Watch R, and the orange phone is the Nokia Lumia 630. Thank you, all of you, for being interested in this topic and listening to my talk. And with this, the future me should be happy to answer your questions. So thank you, Nikita, for your nice talk on how you went about mainlining Linux for the Snapdragon 410. There were a couple of questions, so let's get right to it. For example, Martin asked: how many MSM8916 devices do you have now? So I counted a bit while we were watching the talk, and if you count head by head, I have 15 devices, but excluding duplicates it's about 5, maybe. Not too much, in fact. We have about 30 models supported, so it's just a few of them. Sounds like a lot to me. And CT12 asks: aside from the MSM8916, what other SoCs have you worked on, be that Qualcomm or not Qualcomm? So I spent almost all of my time working on mainlining with MSM8916, but as I've mentioned, I have a couple of MSM8226 devices, like this Nokia and the smartwatch, lenok by its codename, and while I didn't work actively on those, I kind of poked at things a little bit, so maybe I will join the people working on them and try to help them, possibly more on the lk2nd side than on the kernel side. And then there's also the question: which of those MSM devices that you have is your favorite one to use and why? So, to use, I have mentioned already that I use a tablet, this one. And this is by far my favorite mobile device, not only among the MSM8916 ones, but just as a mobile device, because it's basically a tablet and it's a Linux tablet, because we have mainline on it. Some quirks like battery saving are not as apparent on the tablet as on the phone, because I can just turn it off. And I have been using it as my main tablet for almost a year now; in a couple of months it will be a full year. I put it in my backpack like a laptop and it works fine, with Firefox, and I can watch videos on it. It's an amazing thing. Which tablet is that, like what vendor? It's a Samsung Galaxy Tab A 2015; this specific one is the 10-inch model, but there is also a smaller 8-inch one. They are both very cool. Cool, we'll have to maybe get one of those at some point. It certainly looks nice. Al Noor is asking if lk2nd can also load files like the kernel, initramfs and device tree files from a normal ext4 or FAT32 partition. So lk2nd initially was booting basically an Android boot image, but at some point, I think like a year ago, there was a bit of interest in using it to boot from the file system, because that allows us to do system updates. 
Someone had already done a bit of work on that and figured out the file system drivers from mainline LK, and Stephan, minecrell by nickname, did a bit of cleanup for it, but it was sitting in a separate branch without the business logic to actually boot. And at some point, I took all of this and wrote a bit of code to basically scan the partitions and boot the same boot image, but from the file system. This allows us to basically reuse whatever postmarketOS uses for system updates and not reflash the eMMC at all. lk2nd also supports SD cards, for MSM8916 for now. And I basically can take one SD card, plug it into the device, it will load the boot image from that SD card and boot the rootfs. So that was one of my goals to achieve, and with this work from the other people who had started on it, I finished it. And it works pretty well. For other files like the DTB or dedicated kernel files, it's not really my interest for lk2nd right now, because we also want to have fastboot boot working, and a unified boot image helps with that. But in the future, we could basically just load the kernel file and the DTB; it's a bit more complicated to select which DTB we would boot, because we want to support everything. For now, we have nothing, but I have some ideas to fix it. Okay, thanks for the exhaustive answer. Olli is asking if it's possible to boot lk2nd from that Windows bootloader that you showed in the presentation. It's a bit complicated. The Windows Phone people had a couple of shim bootloaders before that. So UEFI boots this shim bootloader, it boots some other thing, and that already boots lk2nd. We had to patch lk2nd a bit as well to support that, but you end up in lk2nd, and at that point it's basically like any other device with lk2nd. Okay, interesting. Currently, I can't see any other questions, but you've mentioned to me that you had a fun story to share. Maybe we can do that now. Yes, it's a little bit unrelated to the talk because it's more of a kernel question in a way, but basically, how do I explain it? Okay, I will describe it a bit differently. Basically one person came to us in the chat and they said that their modem stopped working, like, at all. They mentioned it stopped working in the middle of a call. So I spent like half an hour debugging it, and what I discovered, like my idea of what happened, is that their modem was basically disabled in the settings. And I think what happened is this: if you look in Phosh, you know there is this button. On Android devices, it's supposed to disable mobile data. Here I think it disables the modem completely. I think what happened is that they were talking on the phone and they pressed the button with their ear. On most devices, we didn't have a proximity sensor enabled at all. So the touch screen stayed enabled, they were talking, and they disabled the modem. I didn't understand what was happening until they mentioned that in the settings the data was off, and I was like, is this mobile data or is this the modem? So it was basically a very small feature, the proximity sensor. We kind of forgot about it, in the sense that it works through sysfs, but we basically needed one small property for iio-sensor-proxy to know when the sensor reads "near". And without it... we were testing it through sysfs, it works fine, so we kept it and moved on to other things. 
I basically had to figure out why it didn't work in user space afterwards, and that's when we found the problem. I've definitely been there, especially before Phosh started to actually blank the screen when there's an ongoing call. Chris Simons is asking: what tools do you use, or use the most, to discover how a device boots? Is that ADB, the Android debugger, or even GDB? So, how a device boots... I don't quite get what this question is getting at, but in a way, I reused some previous knowledge of how Android boots, because there is a little bit of documentation and a little bit of other knowledge about which bootloaders are involved, and it's based on that, mainly. So, no special debuggers.
The Qualcomm Snapdragon 410 (msm8916) is a SoC that was used in many smartphones and tablets around 2015. It is the most mature "aftermarket" platform postmarketOS can offer at the time of writing. Many of the supported devices are quite usable and have most of the expected features like phone calls and mobile data working. The talk goes over some of the most important challenges that we have faced while supporting those devices and describes the ways in which we have solved them. Apart from the Linux kernel, we focus on various other tools and projects like lk2nd - a shim bootloader that prepares the environment for booting Linux and hides some device-specific quirks from the kernel. It also unifies the boot and installation process on all devices. We also have other tools and resources to make porting easier. Those include various documentation or even a fully automated display driver generator that helps with the fact that each display requires unique initialization.
10.5446/56929 (DOI)
you you you you Hello everyone. Thank you very much for joining. The idea of this Fosh Get Together is basically since we can't have a real life meeting, we want to take the chance to kind of like get to know each other a little bit because we've been like, there's interesting enough like several people from different projects and that makes this pretty interesting. Maybe one sentence about Fosh for those that don't know. So Fosh is a mobile shell for mobile devices based on GNOME technologies and it was initially developed for the LibreM 5 by Purism, but it's also a community project with contributions from all directions and that basically brought about the idea of this Get Together. I would suggest that we maybe make a short introduction round so everybody kind of like tells his name and which maybe which devices you use Fosh on and what distributions and what you worked on in Fosh and maybe what you intend to work on and what brought you here. That would be nice. If that's okay, I can start. My name is Guido. I'm working with Purism on the software side of the LibreM 5 phones and sometimes which involves basically work all over the stack starting from the kernel mostly in the display subsystem, but also some work on the shell side which is Fosh and the compositor and to somewhat lesser extent on the on-screen keyboard called Squeakboard and I also contributed a little bit to the related components we basically build upon which is like GNOME itself, W-Root, Glib, GTK, GNOME settings for example. Myself, I'm using Fosh on the LibreM 5 phone which I basically use daily as my only phone and I have a OnePlus device for comparison to see kind of like how it works on other devices. That would be a short introduction for myself and I'd part it on to the next one. Why do we want to continue? Sure, thanks. Well, hello everyone. My name is Arnaud. I'm the founder of Mobian which aims to bring Debian to mobile devices. As such, I do a lot of world packaging software for both Debian and Mobian and while I happen to contribute a bit to every project throughout the stack, so you'll find patches of all but me and the kernel in new boots, in Fosh, Squeakboard, basically small contributions everywhere. I'm more of an embedded developer so rather low-level programming originally so I don't often touch user-facing applications but more like system integration and basically configuration, system D and so on. Thank you. Next, according to the tiles on my screen, would be Drota. Hello, my name is Drota and like Kido, I work for Purism on the LibreM 5 phone. Squeakboard is something that I came up with. I mean, the name, the software was basically lifted and this is mostly what I'm working on. I split the time working on Squeakboard and on the camera stack more recently but I have touched pretty much every part of the LibreM 5 stack at some point. Yeah, I'm not really actually using Fosh that much, basically just on the LibreM 5 phone. I'm not adventurous enough to install it on my laptop but I keep working on it. Thank you. Next up in my tiles would be Flo. Hello, my name is Florian. I'm a software developer in some boring usual company in the LibreM but some days ago I started to learn Rust because I was bored of coding in Java and this is how it started. I only used Linux and at some point my, I don't know, some smartphone, my Android phone broke so I just ordered a pineapple and now I'm very eager to get this whole ecosystem up and running. 
I never wrote any line of C code, not production code at least and now I spend some time and add some minor contributions to Fosh, mostly with the help of Guido and yeah. And my dream is to do this for my living but we will see. For now it's my hobby. Yeah, but hobbies sometimes turned actually into the living that sounds like a great plan. Thank you very much. Next up in my tiles would be Sebastian. So, hi, I'm Sebastian. I also work with Prorismon LibreM 5. I'm mostly responsible on the compositor, a code fork that's mostly used with Fosh but my work also happens to get build a rich stack, what else? I've been using GNU Linux phones as my main phone basically for many years now since the open moco days. So, happy to move that forward. Great. Thanks a lot. Next up would be Tobias whose picture is frozen so I'm not sure we're going to hear you. Oh, we hear you. Okay, sorry. I mean, I can try turning it off and on again but yeah, I don't know. Can this picture work now? It's the same but it's a good one so. Oh, okay. So, hi, I'm Tobias. My camera doesn't work today. I don't know why. No, I'm a designer on the LibreM 5 team and also generally on the GNU design team. Yeah, since many years at this point I guess and I do design on everything where it's required including the shell, apps, design patterns. I mean, I don't know. I think it just so happened that a lot of the design that was needed over the last few years was more on the developer design pattern side. So, now I'm giving these talks like today that are like would maybe be better given by like a developer but I guess that's sort of how it happened. I'm kind of like all over the stack when it comes to design. Thanks. Then I think the last title I see at the moment is DevArchivet. Do you want to introduce yourself? Yes, sure. My name is Evangelos and if you've watched the previous talk you may or may not know already depending on if you paid any attention that I'm working on GNOME calls for Purism. I maybe have a remark because of the floor you said something about turning your hobby into what pays your bills. That's more or less what happened for me. So, when I got started I had the Piant phone which I got like two years ago and yeah I saw Arnaud's work with Mobian and I was immediately blown away and I had a lot of free time on my hands because of the pandemic and so I just started contributing and that in the end led to me working on GNOME calls. So, it's certainly possible. Oh, good. No. Yeah. That's a good hint. We have two more people in the chat that basically signed up to join which is Zenn Walker and Chris but they are as far as I can tell not here yet so we can just kind of like, I hope they join. Otherwise we can just like continue chatting a little bit. Is there any any any Fosh related stuff or any other stuff you want to chat about otherwise then just go ahead and we can also take questions from the from the chat if there's any Fosh related questions maybe. Otherwise we have no formal procedure and I can only hand beer virtually which I'm trying to do now. At least I am not seeing any questions but I noticed something from Florian that's your Florian you are you have moved from Java to Rust. Yeah. It just so happens that I am the Rust ambassador of sorts in Fosh so I'm really excited that there's more people coming if you want to get properly oxidized I invite you to take a look at Quickboard. I already did. How do you find this? Overwhelming as usual when you look into new projects it's a lot of code. 
I like Rust because it allowed me to code. I don't know why but I was never able to do this to just get started and see and doing it for Fosh took a lot of help from Guido's side but I never had this issue with Rust. I don't know why. At some point I'm very happy to contribute to Quickboard right now for one month I think I'm working on a merge request for Fosh so I'm obligated there and on the other hand I want to help with an app ecosystem so that's another sidetrack for me. But also Rust. Having seen that merge request it is a rather big one so I think working on that for more than a month is certainly understandable. Yeah I think you picked a complicated piece with lots of things that need to come together. It's okay as long as no one is in a hurry. No no no. I think it's always kind of like when you have something that is almost finished or at the point where your work is then one wants to finally have it merged because then it can be used because out of tree code just usually sits there and this is not used regularly. So that would be cool if that moved on and I hope to get to that next week to give us another testing and review round. But I think the last time we were basically pretty close. There was some bit missing from the requests but that should hopefully work out. Yeah we'll see. Great. Anything else? Because otherwise because you talked about the app side and I was kind of like wondering several things. That is for one I think you gave a note about the calendar application you're writing which I found interesting. Is that right? And I was wondering how like when you say it's easier to get started with Rust is this also true for GTK.rs? No no. For me it was really helpful to learn GTK by working mostly with you getting receiving your comments and everything and reading the C documentation while being able now to understand more of the C code. But there is this upcoming Rust book which is very very helpful and having more examples and people asking questions on you know some forums and everything would be helpful. So by time it will it will be as easy as for experienced C developers I guess. I hope. Yeah I hope so too. And then I think it already got kind of like magnitudes easier than it was initially with doing Rust with GTK with Rust sorry and I kind of like didn't follow it too closely for some time and then I had a look recently and I was kind of like impressed how well it integrates nowadays compared to how that was. That will actually really be interesting. Yeah and because we have to buy it here the interesting part is that the design feedback for the run dialogue it went through pretty pretty quickly. I don't have that luck usually. There's usually lots of design feedback where I need to fix things on the foreside. Yeah the other thing I was wondering kind of like because I like for just like the name people stick to this thing which is basically like the foreshel the compositor and the keyboard on top of basically like what a GNOME technology stack is what kind of like are there any any apps you're missing currently for daily using the phones that goes basically out to to everyone. Because I think that's also interesting kind of like where we wanted to focus on to to get more people to be able to use it like on a daily basis. Actually I've been using the push-based system on Mobian for almost two years now but I guess I have pretty low expectations in terms of application availability. 
Overall what's nice is that most most of stream developers start to really care about making adaptive apps at least in the GNOME ecosystem and we're seeing more and more apps which are either designed for the mobile use or pre-existing apps which are being redesigned so they work fine on mobile and overall the my experience with this whole ecosystem has only improved over the past two years in a dramatic way. That's great to hear. Maybe I just also want to chime in. I've also been using my devices with Fosh for the last two years maybe the difference for me is I've never ran any Android devices so I also started with let's say low expectations as if they could if the device is able to do calls that would basically satisfy my use cases. But I do understand that it's different for everyone on what is considered daily drivable. Yeah I think the same that's why I'm usually very interested kind of like what people are actually running. There's some questions in the chat. Should we go about answering them? Is that okay? I think the top most one from Dylan. What is the current status of swipe support? I think this is I assume this is about top and bottom swipe. I need to clean out the compositor side actually which is sitting there since December and I didn't get to look at that again. And I think the top draw will be the settings will be pretty easy then. For the bottom side we want to rework the overview to make that suitable and thankfully Adrian picked up work on that and we already landed some initial work working towards that direction. And if that goes on we can then also make the bottom brawler swiple at some point. A road map is the next one. I don't think there's a formal road map at least none I'm aware of. There's certainly things like swipe support which are like higher on the list than other ones. And other things like emergency call support which is also being worked on by somebody at the moment by Thomas which isn't here. That would be one of the next things but then there's like no formal road map but we could actually if that would help somebody maybe put that out at some point because there are obviously things that are higher priority than other ones but that's not been formalized at any point. And in general terms we have obviously a rather long list of issues in every of the projects so there's always something that needs doing. Exactly it's basically like more about like putting relative priorities somewhere and they do have these but they're not like spelled out anywhere and it's very often it's kind of like if there is something somebody works on then it happens sooner than if there is somebody not nobody works on so if there's anything in particular picking up that issue at hand is usually the best thing to do. So one thing that maybe is worth mentioning when it comes to road map question. Jogito already mentioned other than picking up things with the overview. So I think it's worth saying that there is some work happening there that will allow us to have paginated application launcher. So of course that's a longer term thing but it all leads to the state where you should be able to have pages that you can move applications reorder applications between them and organize your launcher icons this way. So yeah this will take multiple steps but some work is already happening there so I guess this is something high on the road map. 
Yeah but this is actually like one of the things where we got kind of like Adrian like developer time from somebody skilled in that area kind of like to work on that and kind of like that bumped it like additionally and that is really cool because we also have like the initial stuff merged there. Then there's kind of like I think that is kind of related to what you said Sebastian is like there's the question about the home screen and if we consider adding something like that that would probably be something for Tobias from the design side if he wants to touch on that. What do you mean by home screen? I don't know. I mean if you mean like an android a second place where there's a bunch of apps then the answer is no because like that is the garbage fire on android and I think we should repeat that. I think like in that respect we're closer to like what iOS does where like there is a canonical place where the apps are and then you have multitasking except in our case like multitasking takes a more center stage kind of position compared to the app grid which I don't know. I think from the GNOME side like we've always like followed webOS in a long ways and that is like how they did it back in the day and yeah I think that is like for how most people use phones like that is the superior paradigm because like you're mostly not launching the stuff you're switching and so yeah I don't know like if people mean like additional things in the app drawer I don't know you could think about like widgets or something like that people usually think of like as like home screen stuff in the app drawer potentially but like I don't think that's particularly high priority like the structural stuff around like navigation and like yeah figuring out all this wipe sense on is probably the highest priority stuff. Yeah and maybe like another thing that is ongoing or not not really ongoing off and on is basically like adding more stuff to the lock screen because nowadays you basically right lots of stuff are happening on the lock screen so you want to kind of like have the quick settings there and other things that is basically like decided and I think basically also clear where we want to move design wise but somebody needs to get around to to change the code. There's also like the question about landscape mode sure that should be better it's like not super high priority and it's kind of like will be easier when we landed the swipe gestures because we can then kind of like move whole panels out of the way. Yeah then maybe if I wanted to ask I'll know all the kind of like from the Mobian perspective or from like other downstream like packaging perspectives is there anything we could do better making it easier to grab for distributions? That was some of that. Actually most of the most of the packaging is quite easy and well documented in terms of external patches needed. It can be a bit annoying at time because while Fosh being evolving quite rapidly we need some downstream patches or un-released patches for other projects but actually in general it's not a big problem. The real pain point and we saw that a few days ago in the last few weeks for this quick board is mostly the Rust ecosystems in distributions. I mean we it's rapidly evolving ecosystem too and all distributions don't package the same version of the Rust dependencies and it can lead with packages such as a GTK-RS with major problems due to incompatible API changes. 
So I'd say for now the biggest grief I have regarding packaging and it's not only Fosh and its ecosystem but also the new apps being developed for GNOME is that a lot of them use Rust which is both a blessing and an occurs. Rust in a really nice language and I've started using it also a few months ago and I'm amazed at how it can improve many things when compared to C or other languages but in terms of ecosystem support and distribution packaging it's really a mess currently things are improving slowly but we're not there yet and that's the main problem for now. So I'm pretty glad that Fosh itself isn't moving to Rust right now and there are only a few small pieces relying on that but hopefully this will improve over time and we'll be able to put in major Rust applications without any issue. Yeah I hope so too it's kind of like that once that gets like more easier and less breakage that would actually be very cool. I think we have Zen Walker now kind of like did he join the session not yet no. Now so Evangelos asked me to kind of like if we go over the 30 minutes which would happen in like one minute and 15 seconds just to move over to the closing ceremony because I think that will be another open Jitsi room where we could continue chatting if that's fine with everyone and otherwise kind of like we have one minute left for any questions comments anything else? As is what discussed just a few minutes ago I'd like to add a few comments on the home screen thing. I assume that the closest part to a home screen with all your preferred applications would be the favorite bar in Fosh. However there's been a comment about having program groups and being able to group several similar applications or several similar use cases in two folder and I think that's a very interesting idea which would be worth thinking about and maybe Tobias will have a few comments on that when we move to the next room. Okay we have 10 seconds so I'd like to thank everyone for joining I hope we can repeat that at some point and enjoy the last hours of FOSSTEM and I hope to talk to you soon.
You're contributing to Phosh or its wider ecosystems as designer, translator, distribution packager, tester or developer (or intend to do so)? Then join us at this get together. There's no formal schedule, it's just about meeting other people since we can't have a RL meeting.
10.5446/56930 (DOI)
Welcome, I am Oliver Smith and this is Linux Mobile versus the Social Dilemma. They say good presentations start with a story, so here's a little story. This is me in 2009. I'm walking down the street listening to music and I see some people in the distance and I recognize one person in particular. I think I know them because they look at their phone like this, but it's 2009, right? So not many people have this gesture and look at their phones. And so I think this is probably that one person in my class who got an iPhone. And as I think about it, I find it remarkably if it was true, then I was able to recognize who this is just by the weird pose of looking at their phone. And so I get closer and it turns out, yes, it is really my friend from my class who is looking at his iPhone. So this is in 2009. And now today it's more like the opposite because you can recognize like everybody is looking at their phones and it's almost easier to recognize the people who are not looking at their phones. And the problem is the phones, while it makes you happy short term to look at your phone all the time, it doesn't really make you happy a long term. So this is what the social dilemma is about in a nutshell. It's a movie which came out recently, but you don't need to have seen it to understand this talk. I'm going to mention some of the issues which are mentioned in the movie in the beginning. And then I'm going to talk about what we can do about this in Linux mobile. So this is the Linux mobile ecosystem. I'm with PostMarketer S, but this is just one of the distributions. It's amazing how huge it is right now. I think we have like over 20 Linux distributions. We have various user interfaces, Fosh, Plasma Mobile, SXMO, which I run for this presentation right now on my phone. We have companies invested in this who really try to run Linux, run mainline Linux on phones. And yeah, sorry if I forgot your project. It's hard to keep track of all of them. But yeah, let's figure out what we can do about this problem. So, why is it that we stare at our phones all the time? The problem is that the phones, the software on the phones and the companies who write them are driven by an attention-based business model. So they get more money the more time you spend looking at their apps at their operating system. This is because they try to do advertising oftentimes. You can find a lot of apps which are free, but have advertising built in, or they try to sell you a product. For example, apps or in-app purchases. And this all requires you to interact with the app, with the operating system as long as possible. Now, how would I keep your attention? As you probably know, companies create detailed profiles of you, and then they can predict what you like and give you an infinite feed of content. So you probably heard of doom-scrolling, which is when you go like this all the time and find a new interesting message after the next message. And they have autoplay, of course, so you watch a video, and then it automatically watches the next video just to keep you in the loop. And you don't even think about, much should I quit now or not, unless you really pay attention to it, and you have to put an effort to actually quit watching and such an infinite feed of either messages or videos. And of course, they recommend videos and content to you also to keep you engaged. And actually, it leads to showing you more and more outrageous content, because that's the stuff which keeps you interested. 
If boring stuff happens, then you will stop looking at your phone. So algorithms decide what the world will consume next. This has an enormous impact on the world. Think about it. So many people are consuming content from all these brands, Apple, Facebook, Google, Instagram, which Instagram belongs to Facebook, Reddit, Snapchat, Twitter, YouTube. So it matters quite a lot what gets recommended next to you, what gets amplified. The problem is these algorithms are not open source. So you can easily verify what they are doing. There is not even an API for third parties to analyze what gets promoted. So in this movie, the social dilemma, they showed one researcher. I'm not going to attempt to pronounce his French name because I'm not good at it. But he worked at Google and YouTube and then left the company because he tried to change the company for the better from the inside, but they weren't interested basically. And so he left. But he found a way to actually measure how what gets recommended and prove that it's bad, what they recommend. And so of course, as a hacker, I was interested in how does he actually do it? And it seems there are two methods to do it. So what he is doing is he built a script basically or a program which will clear the history, then start watching one video and or pretend to watch one video and then look at the end what gets recommended and put it into a database and then repeat the process. And also figure out what happens when you keep following the recommendation stream. So you like clear your history and then you watch maybe 20 videos and then you look at the result. Where do you end up with? And it turns out that you look like as an example, basically you look at cat videos at one point and it turns out in the end that you landed flat of theories. Something like this is what you could actually prove. So there's an article from the Guardian. It's outperforming reality, which is the primary source for this. I believe he's the main guy who analyzed the stuff. And I also found that there are other methods. So actually there are a GPL license browser addons which you could install, which then track what recommendations you get and submit them to a database and so somebody else can analyze them. Not that I would recommend that, but it's I found it interesting how it works. So anyway, they got YouTube and Google to actually changing and tweaking their algorithms and recognize that this is a real problem. But in the end, they still have the attention based business model and it works always against the effort for making the algorithms more humane. So you can't really solve it in the end unless you give up that business model. So what happens to society if almost everybody is consuming content from these proprietary algorithms whose purpose is to maximize attention instead of doing something good for the human who consumes the content? There's this so-called ledger of harms. This is a website by the Center for Humane Technology, which are the people behind the movie, The Social Dilemma, and they made a list of things that happens, which is research based and has credible sources. And I made a slide where I just copied the categories and the summaries of the categories to give you an idea. When you go to the website, you can click on each of these and get more concrete examples. And as I mentioned, the sources. So it starts with the next generation, the next generation's children. It harms their development and unfortunately also increases suicide rates. 
So they have an example there when somebody gets cyber bullied, they are more likely to do a suicide because the reach is much broader than when it happens offline. And there are more problems. As I said, I'm not going into each of them. Just some, I mean, one which you are probably aware of is all the misinformation that's going around conspiracy theories, which are amplified, then that it's bad for attention and cognition and all the stress you get. So it's bad for your physical and mental health and it's bad for social relationships, of course, and so on politics and elections. If you have seen the Cambridge Analytica scandal, if you follow that, then you also see what's going on there. And one thing is also striking that, well, the people who actually work at these tech companies in Silicon Valley, oftentimes they don't allow their children to use their own technology the one they produce because they know it's harmful. Okay, so what can we do in Linux Mobile to help fixing these problems? And now you might be thinking, what we can actually do something because nobody is using this and this is so painful, why would it even be relevant that we can contribute anything to this? So I think we can do quite a lot and, well, Linux Mobile may seem insignificant now, but as I've shown in this other slide with all the projects on it, I think we've reached a critical mass of developers and projects by now. So it's very likely that this will go on for at least a couple of years, even if some of these projects should fail and I hope none of them does. I hope every project of these thrives and grows bigger and we all grow bigger. Yeah, but I think the chances are very likely that this happens. And this means, of course, it will get better and better and at some point we will be able to hand this maybe to our parents and grandparents and tell them here you can use this as phone. So it will be user friendly for them. And we have one big advantage actually, we do not follow an attention-based business model. So in contrary to iOS and Android and all the projects that are very close to them and kind of have to follow what they are doing, we are as independent as it gets from that and we can really make it our priority to respect not only the user's freedom but also their attention. So what can we do in our software? You can write your software with the explicit goal of avoiding the attention crisis seen in iOS and Android. And here are some examples. You can avoid dark patterns. Everybody hates those, right? Like in Cookie Burners you click no, I don't want to accept the cookies. This is what you mean. And then they automatically check all the boxes and you get all the cookies accepted and you want to throw your PC out of the window. So don't do that. Consider what apps you or your OS recommends or pre-installed. I find this quite important because well, if you want to get away from harmful technology, it's important that we don't pre-install it, right? So if you have links to Facebook in your default bookmarks for example or maybe even pre-install it, this wouldn't be a great idea from the perspective of getting rid of this attention crisis. Then one can look at what content they are recommending in their own software. If it's really useful to recommend other content at all, maybe you don't even need it. Or if you do, can it lead to amplifying harmful content? If it's completely user-generated content, you need to think about it. 
How could it be exploited just like when you write a program and you have content from the user, you have untrusted input from the user, you need to think about, okay, how can they exploit this and take over my program. It's a similar situation, but in this case, they don't exploit the program, but the human who is using the program. And of course, you can also disable the stuff by default and make it opt-in if it's useful for some people, but maybe not useful for most of them. Then that's a good strategy. Make sure your operating system is in control of the apps and able to patch out harmful features. And there are two ways to do this. Basically one way is just have everything installed from your package repository. And the other way is you can use something like Flatpak on top of it. But then, well, you kind of need to make sure that you are in control of the apps in the repository which you add by default. So probably not flatpak.org unless you control them. I think Flatpak is a very useful project, but you need to keep in mind that if you use flatpak.org, you will have proprietary applications, you will have applications which have their own agenda which you cannot control, you cannot patch stuff out if you really need to. In the end, they have the power over your distribution. And Purism is a good example. At least from what I've seen, they plan to make their own repository. I didn't look if it's actually implemented like this, but this would be one way to do it. You can make it like elementary OS for example, that they also have their own repository and you can add flatpak.org, but it's not installed by default. So at least the default apps which most people are going to use are sanitized by the distribution. The last point on the slide is make it easy for users to find information they need and hard to scroll mindlessly. So what I'm saying here is, well, give them, maybe don't implement infinite scrolling depending on the app, but rather give them an end of the page. And so they can think about, well, do I want to keep on scrolling here and waste another half an hour or do I want to get up to something else? And you have to think if it makes sense for your app. For example, there is KTrip, which I use to figure out when the bus is arriving and they have infinite scrolling there, but I don't think it's a problem there because I'm not going to scroll infinitely and look at, oh, this is so interesting. Look at how the bus is going to arrive tomorrow and the day after and the day after. So I will stop by myself then, but if it's a social media application, then it's not so easy to stop. And again, you could have this configurable. You could set the default so it doesn't scroll, so it doesn't enable mindlessly scrolling and let the user change it if they decide to do so. Here is maybe a controversial one. So I'm saying Linux mobile doesn't need an app store and the emphasis is on store. So think about what it's like to go into a real-life store. Say you get out and want to buy a calendar, then you enter a supermarket and as you enter, you get distracted by big signs of what has reduced price tag and maybe advertisements playing over speakers. And then you'll see shelves in front of you full of unrelated, shiny things, which then, of course, you consider, hey, should I buy this? This looks interesting. That's new. And you really need to put some effort in to actually get to the calendar section. And once you are there, you also need to choose which calendar do I want to buy. 
That's not an obvious choice oftentimes. And once you made it through that and put in some more effort, then you get to the cash register and this is kind of the final funnel where everybody passes through and they put the most addicting things there like alcohol and cigarettes. And you also need to put in some mind power, especially if you maybe are addicted to those things to not buy them. And yeah, of course, the store has a business interest in getting you to buy more than you came for. And this is why it is like this. They have the incentive to optimize for profit and this is how it ends up in a supermarket. Now what's the app store experience like? You want to install a calendar app. You open the app store and you get distracted by unrelated apps with shiny icons. This is especially the case in the Google Play Store and the iOS store from what I've seen. But it's also happening in, for example, GNOME software to a lesser extent because there you get categories shown. For example, a games category, which is completely opposite from what you might be looking for and is a distraction if we think about it, right? So then in the end, you have to type in calendar into the search bar because that's much quicker than browsing the categories until you happen to find a calendar app. And typically you also find multiple calendars and now need to check them. Well, what's the difference? Maybe I use a more GTK based interface and then I should use that one. And it also takes some mind power to figure out the best app, which works for you. And in the end, you might end up, hey, you found 10 other interesting apps and you will install them and you don't get the calendar you came for and maybe you wanted to use the calendar to organize your life and now you end up playing some random game, which is not a great outcome. So what I'm saying is if you have one task in mind and you want to get it done and you open your phone, it shouldn't be like then you have five things on your mind, which are completely unrelated and forgot the one thing you wanted to do. And this is how app stores work today. While I was making these slides, I thought about, well, if we applied these ideas to an app center, I don't mind if you call it app store, but think about how it should look like, right, then it could look like something like this. We would ask the user, what do you want to do? Do you want to go outside, organize yourself, do something sports health related, or do you want to consume media? And you see, I've put the task, which are actually beneficial for the user on the top and consume media is a broad category for all the things like playing games and so on, which will distract you and take away your attention without long-term benefits. And so you click on go outside, then it says, okay, do you want to navigate somewhere? Do you want to get a bus or train? Find restaurants, whatever. And then you click on get a bus train and it recommends K-trip to you. So you don't even have to think about, okay, which of the 10 apps should I install? This is the best app supported by the distribution you are running and this one works with your user interface. This is how it could be, right? So it would be very easy. You would only tap a few times and you have the one app and you're done. You are not distracted by other stuff. And in case you do want another app, you could always click on show similar apps or search all apps if you want, I've also added a button there at the bottom. Here's another example. 
This is how it would look like when you click on consume media. Then it says, listen to a podcast, listen to music, play a game, watch videos, do social media. And of course on social media, you can also say, well, it's work related for me. I go there to post stuff. But in reality, it's always the case that when you open social media, you are always also consuming content. You see comments to your post, you react to them, you see the likes and they go to great lengths to show you other content which is happening on the site. If you are a professional social media person, if that's your day job, you will look at other posts too and comment to them. So you are always also consuming content on social media is my impression. Okay. So you said, listen to a podcast and we have Casts, which is a good KDE podcast player. And you can go to show similar apps and maybe you want to install G-Potter instead or not. But you could do this. And so here the idea would be that the distribution says, okay, Cast is the one player which we actually tested, which we know works well. And we also have G-Potter which some people use, which is less tested. So if you really, if you know what you are doing, you can use the other one. But this first one is the one which will work properly. Yeah. So as I said, this is an idea. Maybe we could implement something like this. This is not implemented anywhere from what I've seen. And I didn't start implementing it myself because it's a lot of effort. But I want to get the discussion going and get people thinking about it in these terms of how can we respect the attention of the user more and how can we design our apps in that way. And maybe I got a bit fed up with the socialize button in GNOME software, which stands for social networks in their case. But I think it's the complete opposite. I think social media brings out the worst in humanity and it doesn't lead to improving your social relationships. So I think this should be renamed. And we shouldn't refer to social media as socialize social. It's not that. With that being said, I appreciate everybody who worked on GNOME software and it's great and all the other software centers are too. And I also love fDroid on Android really. But I think also there's a lot to improve regarding respecting the user's attention. The section is called where do we communicate about our project. So choose your platforms wisely. You might, well, if you go from the opposite direction, you might be thinking, why do you need a platform at all? Why don't you just keep developing on your own and just don't care about everybody else? If you can do that, that's kind of great. But at some point you probably want to share your work. So it is useful to actually publish your work at some point and share it so you can attract users and developers. And I don't mean annoy people, but I mean find the people who would be really interested in what you're doing and for whom this would solve a problem. And then they can use your software and contribute to it and then it can actually scale. So one example is famously the Linux kernel was announced at the Combo as Minix use net group in 1991. And I thought that went pretty well so far. So what should we do in 2022? And should we keep posting to Combo as Minix? Probably not, especially with Linux mobile. It's completely unrelated. But yeah, maybe social media is one idea. So surely you can reach lots of people and going where your users are, as they say, makes sense. 
But you have to consider the downsides to, as I said before, you're not just publishing, but you're also consuming at the same time and you are exposing yourself to these harms. So I don't know about you, but I caught myself scrolling mindlessly on feeds with messages. That's one thing. And of course, you promote the platform because people who want to figure out more about your projects have an incentive to go to that platform then and read about it. There are what you write there. Yeah, so it's not black and white. And I can understand if you want to keep using social media and we use it for post marketer as we had a long discussion about this in our team. But if it has to be social media, you should at least really, I mean, in general, you should think about if you want to use social media at all. And if it has to be social media, consider open source and decentralized platforms. We have Masterdome, which is awesome compared to Twitter. Really, it's quite the achievement. And Linux mobile is there. Actually there are a lot of people there. It's a big community. So if you have to decide between Twitter and Masterdome, go for Masterdome. Then we have PeerTube instead of YouTube, which you could use and we have Lemmy instead of Reddit. So at least consider these options and think about if you are only on Twitter, maybe you also want to go to Masterdome or yeah, things like that. Discuss it with your project, with the people in your project. You can also build one way street. This is what we did in Postmarketer S. So here's a short timeline. In 2017, there was the public launch and there were blog posts on my blog and later we moved it to our own homepage. Then we did Reddit and Twitter pretty much since the beginning and still going. And we used ISC and Matrix for chat again, open and decentralized. 2018, we also joined Masterdome. 2020 started a podcast and 2021, we reconsidered our approach to Twitter and Reddit. This is what I just mentioned. We really discussed this and we're like, well, aren't we part of the problem if we keep using Twitter? And yes, we are. But also we don't want to give up on all the users who are following the Postmarketers account there. And so we decided we build a one way street. And yeah, so around that time, we also joined Lemmy, so we have an alternative to Reddit. And now what I mean with one way street is that we point people from Twitter and Reddit to Postmarketer S.org to our homepage to our own content, which we control. And we avoid pointing them back from our homepage to Twitter. So there shouldn't be an incentive for users who are browsing the homepage to go to Twitter. But only the other way around. So at least then we reduce the harm that more people are signing up to Twitter because they think, oh, well, Postmarketers is there. So it must be a good thing. And yeah, we actually added this huge disclaimer on our homepage. Yeah, basically we say, okay, we use open and decentralized platforms, Matrix, ISE, Masterdome, Lemmy. And if you really want to, you can go to these proprietary platforms below, but there are several downsides. And this doesn't even mention the attention stuff I'm talking in this presentation about. And yeah, with all that being said, maybe your project doesn't need social media at all. So we, for Postmarketers, we have made up our minds, we have our position, but maybe your project doesn't need social media at all. 
So if, and also for yourself, if you want to stay informed about the Linux mobile scene, you can just subscribe to RSS, Atom Feeds of Linux mobile related blogs and news sites. And we kind of have great ones of them. Actually, we have linmob.net. This is run by Peter Muck, who creates a weekly list of everything that has been going on in Linux mobile. It's a great alternative to consuming Twitter really, because you'll see all the important stuff and it only comes out once a week. And when you are done with reading the list, you are done. And another website, which I can recommend is tuxphones.com, which also have a lot of Linux mobile related articles and doesn't focus on one distribution alone, but on the whole scene basically. So what can you do to tell people about your project without using social media? You can contact the established content creators from our scene, like the people who run the sites above, just write them a mail. They have contact information on their homepage. And maybe you can post a guest blog post there, or maybe they can link you in Linmob's weekly thing. You can get linked if you have an interesting project for sure, I think. And I think tuxphones maybe accepts guest posts, or maybe they want to write something about you. So you can totally get something out there. And as I mentioned, people read these sites already, so you don't need to additionally post it somewhere. And these people also run their own social media accounts, so they will take care of that. You don't need to interact with social media then yourself. What you can also do, and maybe do multiple things of this list, you can start your own blog, which is great because then you are completely in control of your content. And again, you don't have anything unrelated, you have just your own content. Another thing that you can do is start your own podcast. So this is my favorite way to keep up with projects actually and to learn about new projects, because then I don't need to stare at the damn screen all the time. I can just go out and put on headphones and walk around in nature maybe and learn about the project that way. And there is so much more that comes across in a podcast. It can of course be much longer than a blog post. You can have a 30 minute, one hour, whatever you want discussion of whatever is going on in your project or introduce it or whatever. Whereas you wouldn't read a blog post that takes you one hour to read, at least I wouldn't. And of course it's nice to hear the voices of the people who make these projects. It's just so much more emotion comes across when you hear them. And another thing which has a lot of emotion coming across I guess is when you do a talk at a conference like this, like FOSDEM, KDE Academy, maybe your Distro Specific Conference, Alpine Conference coming up this year in May I think. Actually looks at their own conference, something like that. There are lots of conferences where you could do a talk about what you're doing and present it. And then we can actually see you which is also nice and more personal. Thank you very much for watching this, for your attention. And if you also think this is important, then consider doing the following. You can help design Linux Mobile upstream and inside your own operating system to help fixing the attention crisis. You can prefer open and decentralized platforms over proprietary ones. You can get out of social media or turn it into a one way street. 
You can watch the social dilemma or listen to your undivided attention podcast. I think both are really excellent. Actually among the best movies and podcasts I have listened to and watched, I would say. So they really go deep into these issues. They explain it so well. You can also show it to people who are not familiar with technology. They will also understand the problems. And in the podcast, they go deep into each of these issues and also show, for example, how the attention crisis relates to climate change and a lot of other things, how it relates to Las Vegas, the design patterns there are very similar to Doomscrolling, for example. And finally, the scene will keep growing and design and platform decisions you make now will matter for the future. Thank you very much. Okay. Thank you very much for the interesting talk. I think we can start straight away with the questions. We have quite a number of them. Sure. First of all, the first one is a comment, a person pointing out that he's unfortunate that most parents in these days don't care about what their children consume. They just throw a tablet at them and talk about these. Yes. It's a problem because there are things like YouTube kids, for example, and as parents, you probably assume, yeah, it's fine and you just give it to your kid and then they consume weird content and it has the same problems that it starts amplifying towards more extreme content even for kids. There are whole blog posts about this. So let's move to the next question, I think. Thanks. A person is asking if there is any published research studies that provide indications for the phenomenon where Silicon Valley employees forbid their children to use their own technology? Yes. So there is this website called Ledger of Harms, which I mentioned in the talk. It's ledger.humanetech.com and it has a whole lot of quotes to each of these sections and one is do unto others, which is exactly this point that people working in tech companies in Silicon Valley do not want their children to use their tech. And there's, for example, Steve Jobs, which everybody knows, right? He says, yeah, we limit how much technology our kids use at home, for example. And there are more quotes from people who worked at Facebook or still worked there, I think. Just look it up on the website. So there is definitely some, I mean, not studies, but quotes from people who work there, which is, I think, scientifically enough. Thanks. The next question is from me. You mentioned a platform like MasterDone and it implements the same user pattern as Twitter, like the infinite scroll. You have any thoughts about this? Yes. So I think MasterDone is much better than Twitter just by the fact that it's free software, of course, and that it's decentralized and a few other things. But I do not like personally that there's so much copied from Twitter. I think Twitter has a horrible user interface. Like it wasn't so long ago that I used it for the first time. And it's really hard to just understand how it works, that you have to click on dates to see the comments. It's so weird. And this is unfortunately copied into MasterDone. So I would prefer if it was more user-friendly and more targeted towards giving the user what they probably need instead of keeping them also by accident, probably to look at the site for the longest time. So and regarding infinite scrolling in particular, I think it would be better if it was opt-in to be honest. 
It could show a few messages and then ask: do you want to go to the next page or not? And you could have a setting where you enable infinite scrolling. And apps could also do that, by the way. Next. Thank you. One person disagrees that the browsing approach in application stores is really harmful and claims that it could be interesting to use it to find new applications and similar ones, and is wondering if it would make sense to think in terms of letting users browse or search for something specific. That's hard to answer in such a short time.
As FOSS on mobile community, let's do our part to fix the many negative effects of social media and its hostile design patterns. We could become the prime example of how to treat users with respect. To not only give them control over their phone, but also over their attention. From design choices in the operating systems and apps to the platforms we choose to communicate about development.
10.5446/56933 (DOI)
Hello everyone and welcome to my FOSDEM 2022 talk about Common Voice. I have been contributing to Mozilla for the past 13 years. I have contributed to several different projects such as Firefox and community building, and for the last few years I have been focusing on the Common Voice project. Before we fully jump in, I want to give a quick outline of what we are going to talk about today. First I am going to give you an introduction on what Common Voice actually is. I am also going to show you a quick demo so you understand the full context, and then we will jump into how we collect sentences for Common Voice. That part will be split in three: there are three ways to contribute sentences, and I will go into all of them. At the end I will also show you how you can contribute to that as well. We are always looking for new contributors, so if you have any questions regarding contributing, feel free to drop them into the Matrix chat and then in the Q&A section we can dive deeper into those questions. The same goes for any other platform questions. If something is not clear in my talk, feel free to ask the question and we can hopefully answer it. Now let's get into what Common Voice is. Common Voice is a project led by the Mozilla Foundation. Its goal is to help make voice recognition open and accessible for all. There are several voice recognition datasets out there. However, the big ones, for example from Google or from Amazon, are not open at all. They have the data, but you won't ever be able to actually access those datasets to create your own machine learning models or even use the existing models from them. Therefore it's crucial that there is an effort to create open datasets that everyone can use, and not only for languages that have millions of speakers, for example English or Spanish, but also for languages that are not that well represented right now. The dataset of all the recordings, together with the sentences, is published under a Creative Commons Zero license, so anyone who wants to use it can actually use it. As of mid-January there are 159 languages on Common Voice. 88 of them are already in progress, meaning people can actively contribute their recordings, and the remaining languages are either in the website translation step or gathering the initial set of sentences to make sure that contribution can start. If you go to commonvoice.mozilla.org you can see the website. This is the languages subsection, just to quickly show you how many languages there are. Some have been actively contributed to in quite large numbers. However, there are also a lot of small languages that are slowly but steadily gaining traction and getting their recordings in. If you want to contribute, feel free to check out the languages subsection to find the language you speak and see whether it's in progress or already fully able to be contributed to. You can also find the dataset in the datasets subsection, where you can see the different stats on how many hours are already validated and how many voices there are, and you can download it and use it for your needs. There are two sections to contribute to. The first one is recording your voice. You get shown a sentence, and then the goal is to click on the recording button in the middle of the screen and speak that sentence out loud. After you have done five sentences those get submitted, and then equally important is the validation.
You can click on listen at the top and then you can listen to other people's recordings and say whether what you hear actually matches what is shown on the screen. If there are enough validations they end up in the dataset. Additionally, you can also report a sentence if it has mistakes in it, if there is a typo for example, or even if it's an inappropriate sentence. We try to make sure that that doesn't happen, but I don't think we can catch all of them, so please report if you see something. You can also contribute without logging in. However, when you log in, your profile also has sections to fill out your accent, for example, which gives more data to the dataset, so if you are willing to log in, please do so. In the end there is no email or anything attached to you in the dataset, so from a privacy perspective that is okay. However, it's totally fine to also do it anonymously by not logging in; in the end it's up to you, but if you can, please do so. Now, that was a very quick overview of the Common Voice website. As I have already said, the dataset is available on the website. You can see the statistics, you can download it, and there is a new release every six months. Now, for this to work, for these sentences to show up, we need to actually have those sentences. How do we gather them? That's the topic of this talk, and there are several ways we can do that. The first approach is to use the Sentence Collector, as you see here. It is a website where you can either collect sentences or review sentences. Once again we take the approach of contributing by creating or contributing by reviewing. You can go to your profile and select which language you want to contribute to. In my case, for now, that's English. And then you can go to the Add section. In case you have multiple languages in your profile, you can select which language you want to contribute to here. And then we can add a public domain sentence. Important: it needs to be public domain, because otherwise we're not allowed to use it in the dataset. So for example, here we can say "This is an example sentence for my talk." And we can add a second one, for example "This is for FOSDEM." We also require the source. In this case I have written it myself, so I can say "written myself". And I confirm that these are okay for public domain. When I submit, I get an overview and I can already review the sentences. That is mostly meant for the case where I have a dataset from an old book that is copyright free by now. For example, I can already click on review, review the different sentences, and make sure that non-useful sentences get rejected already at this step. In that case I can say confirm. And now we can see one sentence failed. When we scroll down we can see which validations failed. In this case "This is for FOSDEM" failed. This is because we do not allow any abbreviations. In this case we know how to pronounce FOSDEM, but I doubt that everyone on planet Earth actually knows that. You could also pronounce it FOS-DEM. And the same goes for any other acronyms that are not fully clear. Therefore we avoid abbreviations and acronyms altogether. What you can do in this step is copy the sentence, put it back into the sentence box, adjust it to not include an abbreviation, and go from there. The other sentence was submitted. When we go to review we see an interesting sentence. Generally it is not favorable to include single words as sentences; we could just simply drop in a full dictionary.
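To make the failed validation above a bit more concrete, here is a minimal sketch, in Go, of the kind of checks involved. The thresholds and patterns are made up for illustration; the real, language-specific rules live in the Sentence Collector project itself (which is JavaScript, not Go):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Illustrative stand-ins for the validation rules described in the talk:
// no abbreviations/acronyms, no digits, not a single word, not overly long.
var (
	acronym = regexp.MustCompile(`\b[A-Z]{2,}\b`) // e.g. "FOSDEM", "NASA"
	digits  = regexp.MustCompile(`[0-9]`)
)

func validate(sentence string) error {
	words := strings.Fields(sentence)
	if len(words) < 2 {
		return fmt.Errorf("single words are not useful for recordings")
	}
	if len(words) > 14 {
		return fmt.Errorf("too long to comfortably read aloud in one recording")
	}
	if acronym.MatchString(sentence) {
		return fmt.Errorf("contains an abbreviation/acronym with ambiguous pronunciation")
	}
	if digits.MatchString(sentence) {
		return fmt.Errorf("contains digits, which people read out differently")
	}
	return nil
}

func main() {
	for _, s := range []string{
		"This is an example sentence for my talk.",
		"This is for FOSDEM.", // rejected: abbreviation
		"Help.",               // rejected: single word
	} {
		if err := validate(s); err != nil {
			fmt.Printf("rejected %q: %v\n", s, err)
		} else {
			fmt.Printf("accepted %q\n", s)
		}
	}
}
```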
However when recording on the common voice platform that's simply not interesting. In this case it says help. I am just going to reject that for now. Then we can see my sentence here. This is an example for my talk. In that case I would approve it. And we're done with the review. English is heavily contributed to so you won't find a lot of sentences to review. But I think that was a good example. You can also find the review criteria here in case you're not sure on whether to approve something or not. If you're not sure you can also click on the skip button and leave that decision to somebody else. You can also see if there are any rejected sentences. Sentences you have previously submitted but somebody else rejected. Those were test sentences for me to test something in production. And therefore I already rejected those because those are English for languages that do not use English. Therefore rejected. You can also see which sentences you have submitted in general. As you can see that's actually not too many. I'm mostly involved with the technical side of this platform and let others come up with their own sentences. If you need to you can also select them and click on delete selected sentences. That could be for example if you figure out that the source was not appropriate and you can delete those sentences yourself. And then finally we also have some statistics. For example for English we have more than 50,000 sentences in the sentence collector. 53 of those are already validated. That means they had been exported to Common Voice and therefore are part of the dataset. In total we have 4.7 million sentences in 134 languages. Which is quite an extensive number but given how many sentences we actually need to create hours and hours of recordings we need to keep going. As I said the sentences need to be copyright free preferably CC0. And now I want to go into a bit more detail on how the sentence collector works. The sentence collector is a React frontend using Redux for state management and everything is powered by Node.js Express server. That is also connected to the database so that part is more or less straightforward. The interesting part is how we actually export to Common Voice once the sentences are approved. The sentence collector is currently living in a separate repository. That means that we somehow need to get those sentences into the Common Voice repository. As it works right now there is a subfolder in the Common Voice repository that has text files in it. And these then get deployed into the Common Voice database on website deployments. For that to work we have a GitHub action which runs every week on Wednesday early morning CUTC time zone. And that runs an export script which fetches all the approved sentences for all the enabled languages from the backend server. And then creates the text file, sentence collector.txt, pushes it to the Common Voice repository. And then on the next deployment that will be put into the Common Voice database. Then to give you a quick overview on how that works. It's a GitHub action with quite a few steps. We need to clone the Common Voice repository and then the export is mostly the interesting things. We go through every language. We also print some stats for debugging purposes on how many sentences we got. Then going to the sentence extractor. We have figured out that using the sentence collector is not super scalable. We need a lot of contributors that create their own sentences or find sources that they can use. 
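Before moving on to the extractor, here is a rough sketch of the weekly export flow just described: for each enabled language, fetch the approved sentences and write one text file per locale that can then be committed to the Common Voice repository. The endpoint path and JSON shape below are hypothetical placeholders, not the real API; the actual export script lives in the Sentence Collector repository and runs as a GitHub Action:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

// exportLanguage fetches the approved sentences for one locale and writes
// them to <outDir>/<locale>/sentence-collector.txt, one sentence per line.
func exportLanguage(baseURL, locale, outDir string) error {
	// Hypothetical endpoint, shown only to illustrate the flow.
	resp, err := http.Get(fmt.Sprintf("%s/sentences/approved?locale=%s", baseURL, locale))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var sentences []string
	if err := json.NewDecoder(resp.Body).Decode(&sentences); err != nil {
		return err
	}
	fmt.Printf("%s: exporting %d approved sentences\n", locale, len(sentences))

	dir := filepath.Join(outDir, locale)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "sentence-collector.txt"),
		[]byte(strings.Join(sentences, "\n")+"\n"), 0o644)
}

func main() {
	for _, locale := range []string{"en", "de", "fr"} {
		if err := exportLanguage("https://example.org/api", locale, "server/data"); err != nil {
			fmt.Fprintf(os.Stderr, "export of %s failed: %v\n", locale, err)
		}
	}
}
```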
So what we have is that we can export three sentences per article from Wikipedia, which scales a lot better than individual contributions. We also have some other sources available, such as Wikisource. And these extractions we can automate; that's what the sentence extractor is for. It is based on language-specific rules files, which I can show you now. It's a TOML file with pre-specified keys. For example, here we have the replacements. As mentioned previously, we don't want any abbreviations. That also includes things like Mr. and Mrs. While those might be easy to pronounce, we decided to just replace them with the fully written-out words, and then we go from there. There are also other configurations, for example how many minimum words, how many maximum words, and also whether it needs to start with an uppercase letter or not. Then we have a regex which defines which symbols and letters are allowed. We also have configurations on which symbols need to match together. For example, on line 31 we have the lower quote symbol and the upper quote symbol. So whenever we encounter a lower quote we also expect an upper quote, and if that is not the case we discard the sentence. And then we also have the different abbreviation patterns; for English that is fairly straightforward, in other languages there are more elaborate rules for detecting those. Once that rules file is done, a pull request can be created. To show you an example here: we have yet another GitHub Action that runs on every push to a pull request, and what we have created is an easy way to validate your rules with that. When you open the check you can see an extraction file uploaded as an artifact. You can download that and you will get a few thousand sentences from the Wikipedia extract to verify that your rules apply correctly. That allows you to fix mistakes in the config quite easily. You can push your changes back up to GitHub into the pull request and look at the extraction again to see if it's fixed. All the config files are also documented in the readme, so every single key you can use in a config file is documented. And if there are any questions while coming up with a pull request, feel free to open an issue. We want to keep that documentation as extensive as possible. One interesting thing is that we are using Rust for this tool. However, the Rust ecosystem is not fully perfect in terms of libraries you can use for natural language processing. We started with using rust-punkt to split the sentences: we basically get the full text and then we split out the different sentences. That didn't work out perfectly, so there is now also a way to use a small Python script together with, for example, NLTK, which splits sentences way better. That was a very interesting project I worked on. It's, as of right now, quite hacky to do inline Python, however it enables a lot of other things that could be done. In hindsight it probably was not the best idea to start with Rust for this one; I think if we had started with Python from the start for this tool, we could have done a few things way more easily. That being said, I don't think we need to rewrite it right now, but maybe in a few years, depending on what else we want to implement in this tool, it might be beneficial. And the third method to get sentences into Common Voice are bulk uploads. If you find a source that has a lot of sentences to upload, we are talking a few thousand, 50k or even more, we also provide the possibility to create a direct pull request to the Common Voice repository.
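To tie this back to the rules files described a moment ago: the real extractor is written in Rust and driven by per-language TOML files, but to keep all examples here in one language, this is a rough Go sketch of how such rules (replacements, word-count limits, an allowed-symbols regex, matching quote pairs) might be applied to a candidate sentence. All the concrete values are illustrative, not the actual English rules:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Illustrative stand-ins for the keys of a language rules file.
var (
	replacements   = map[string]string{"Mr.": "Mister", "Mrs.": "Missus"}
	minWords       = 3
	maxWords       = 14
	allowedSymbols = regexp.MustCompile(`^[\p{L} ,.'"?!-]+$`) // letters plus basic punctuation
	matchingPairs  = [][2]string{{"“", "”"}, {"(", ")"}}
)

func checkSentence(s string) (string, error) {
	for from, to := range replacements {
		s = strings.ReplaceAll(s, from, to)
	}
	if n := len(strings.Fields(s)); n < minWords || n > maxWords {
		return "", fmt.Errorf("word count %d outside [%d, %d]", n, minWords, maxWords)
	}
	if !allowedSymbols.MatchString(s) {
		return "", fmt.Errorf("contains disallowed symbols")
	}
	for _, p := range matchingPairs {
		if strings.Count(s, p[0]) != strings.Count(s, p[1]) {
			return "", fmt.Errorf("unbalanced %q and %q", p[0], p[1])
		}
	}
	return s, nil
}

func main() {
	if s, err := checkSentence("Mr. Smith met her at the station."); err == nil {
		fmt.Println("accepted:", s)
	} else {
		fmt.Println("rejected:", err)
	}
}
```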
That basically makes sure that for very large datasets we don't need to push them into the Sentence Collector, because reviewing 60, 70, 80,000 sentences in the Sentence Collector is not really suitable, especially when they are coming from the same source. In that case we can just do a review of a statistical sample by multiple reviewers, and if that is below the error margin we are good to go and the pull request can be merged as well. For more info on that you can look at the Common Voice community playbook linked here. The slides are all attached to the talk, and there are also more examples and more information in the playbook about the possible contributions to Common Voice. Those were the three currently available ways to contribute sentences. I want to quickly go into how you can contribute generally. If you speak a language that is not yet covered in the sentence extractor, we would appreciate it if you could take the time and create the rules definition for that language, and then we can make sure that we can also extract from Wikipedia. Another opportunity to contribute is the Sentence Collector itself. Feel free to create pull requests. There are some React parts that I previously refactored but didn't fully follow through on; there are still some components, for example, that do more than they should be doing. Have a look at the code, create an issue if you find something that could be improved, and we can take the discussion there. Any help there is appreciated. There are also some open issues that can be tackled right away. Another contribution that would be super valuable is to help us find good public domain sentences; that could be datasets that cover multiple languages or just one specific language. Last but not least, contributing your voice to record sentences would also be greatly appreciated. That being said, please don't forget to also listen to other people's clips and validate them. We have a lot of recordings, but all of these recordings also need to be validated by other people. If you need any help, if you want to coordinate a new language or coordinate existing communities, head over to our Discourse forum linked here on this slide and create a new topic for anything that might come up. For today, I'm also hanging out in the Matrix Q&A later after this talk, so feel free to drop any questions there as well and I will try to help you out. I also hang out on the Common Voice Matrix channel on the Mozilla server, so feel free to drop by there as well. If you have any questions, give us your input. We greatly appreciate that. And now, thank you for your attention. I'm looking forward to the Q&A. Hope to see a lot of questions. Thank you. And we are back live. Congratulations, Michael, on such a wonderful and insightful presentation. I did learn a lot. Personally, I was not aware of some of the things that are behind Common Voice, and I'm really glad that you brought this topic here first. How are you feeling today? I'm feeling well. I am a little bit sorry that the sound was a little bit off. It's fine, we all had the sound a bit louder and I think it was decent. Let's get some questions, because we do have some already popping in. If not, do enter the Mozilla Dev Room and add your question; once it's upvoted it will appear here too and I'll take it. I started with the last one, but it's up at the top now. The CC0 requirement is a very strict one, especially for languages with a small speaker base. How can these sentences be collected?
Yes, that is indeed a very strict requirement. However, I forgot to mention that in the talk: there is a process to ask other people if they are willing to actually release it under CC0. There is basically a legal document that contributors can send to, for example, news organizations, and if they're willing to sign that document, then we can use those sentences. I will post a link to that process later in the Dev Room as well as in this channel here. That would be cool. It'll even be tweeted out for those who rejoined and left the call or are not in the Dev Room. Okay, let's take one about the datasets. In which format are the datasets stored? To be honest, I am not fully working on the dataset side; I'm really mostly on the topics that I discussed in the talk. The metadata is stored in text files; those are, as far as I know, separate files. And for the audio, I would honestly need to double check. I think people can also ask questions in the project. You might have mentioned it at the end, but I was already in the Zoom preparing: where is the place for the team to be contacted? Should they just use GitHub and log an issue there? Or is there a Matrix channel or so? There is certainly the opportunity to create an issue on GitHub. However, we have a Discourse forum at discourse.mozilla.org, and we have a Common Voice section there. That's probably the best way to get in contact with the community. There is also a Matrix channel on the mozilla.org Matrix server; it's called Common Voice, so that should be easily findable as well. Otherwise, that Matrix channel is also linked in the documentation on the GitHub repository. Super cool. Let's get a question about the users, because this project has been going on for a while now, a few years. And Bullen was asking: are there any public projects that have successfully used Common Voice datasets? The one that comes to mind is Coqui, which has a lot of language models that are also based on the Common Voice dataset. I will also tweet that link out. That will be cool. Thanks so much for this.
Common Voice is a project to help make voice recognition open and accessible to everyone. To create this data set Common Voice allows volunteers to record defined sentences to contribute their voice. A good data set needs a lot of recordings, and therefore we need to have a lot of sentences to be read out aloud. In this talk Michael will introduce the audience to several ways we are collecting these sentences and goes into more technical detail for these mechanisms. This talk will also feature an intro to Common Voice at the beginning.
10.5446/56934 (DOI)
Welcome to the Mozilla developer room here at FOSDEM, or rather my room here in Vienna, as it's online once again and not in Brussels. This talk is about suggestions for a stronger Mozilla community, about my personal thoughts and ideas for possible improvements in this community. I'm Robert Kaiser and the slides are already up at slides.kairo.at/fosdem2022, or the first entry at slides.kairo.at. About myself: my nickname is KaiRo, which you will hopefully find well known in the Mozilla community, as well as my personal stuff. My email is kairo at kairo.at, and my personal website is home.kairo.at, where you find a number of other ways to contact me. As I said, I'm based in Vienna, Austria, and I'm a Mozilla Rep and Tech Speaker. I have been in the Mozilla community for more than 20 years now and have seen quite a lot there. We will talk about that a little bit. I'm not very active on social networks if you want to contact me; that said, on Matrix and Telegram you can find me very well, KaiRo on the Mozilla Matrix, and I also put links to the Mozilla community page, LinkedIn and GitHub pages for me on the slides. First of all, this talk is personal. Most of this talk is about personal opinions, suggestions and ideas from myself. Nothing of that is coordinated or even endorsed by the Mozilla Foundation or any of its subsidiaries or any staff. This is all my personal opinion, what I'm thinking about it. That said, I'm very happy about comments and about discussions on those topics, but please let's keep it civil and follow the Mozilla Community Participation Guidelines, as this is all Mozilla community stuff and so those guidelines apply. I hope there will be discussions and comments, but please follow those guidelines and keep it open for everybody and friendly. Let's start off with my view of the Mozilla community. It's centered around the Mozilla Manifesto, which I hope everybody coming into this dev room actually knows, otherwise you can look it up on the Mozilla website, and around the open web. That includes the official organizations like the Mozilla Foundation, Corporation and so on, and it also supports them; it's supportive of their views. If you go against those organizations, I don't really consider you part of the community, but it doesn't mean that you need to do everything they say or only follow those things that come from the official organizations, because the community is much larger than the official projects and products. Otherwise it could not be a really large movement. It's powered and driven by volunteers. This is not staff making up the community; staff is paid for their own stuff. Volunteers are doing it because they're enthusiastic about spreading the message, and they want the Mozilla Manifesto and the open web to be the guidelines for technology for a large part of this world, and so volunteers are driving that. Let's add a little bit of a view of the history. If you want to know more about Mozilla history, I did a talk about that last FOSDEM; you can look that up and watch that talk, I hope that's an interesting one as well. Mozilla was one of the earliest large free and open source projects. There was the Linux kernel, there were KDE and GNOME and Apache, and I think OpenOffice started around a similar time, but there were not a lot of large projects out there, very few.
As a browser project and around the web it was very close to what users were doing and so a strong growing group of people formed around this especially because this was around code that was pretty unfinished but did new things, rendering user interfaces of applications with web technologies was very new back then and especially that spawned a lot of side projects. It was easy to come in and fill the gaps where the code was not complete but over time many other competing open source projects came out, competing in the sense that they were competing for volunteers. You suddenly had a lot of choice and more and more choice of where to volunteer your time. Also when Firefox came along development was tightened up quite a lot to make it fit for mass usage for everyday user out there and also when staff came in project management came in to make the development more efficient, all that does not gel that well with volunteers and so people went to a number of those other competing projects where it was easier to get in. More people also went to the outskirts of what Mozilla was doing which was a lot of technology experiments. Some of those were incorporated in Firefox and then they were tightly controlled again, some of those were as experiments go were stopped again and then people were disappointed that something they were enthusiastic about was suddenly not driven forward which also made them move to other projects. Now there are good groups in some areas still there at Mozilla. For example in the support area or in the documentation MDN which grew a lot larger than Mozilla itself, localization, some local groups like in India we have some groups that are strongly needed together and therefore are still going strong but it feels very splittered and unconnected. You often don't know what's going on in this community in other places and it's also hard to motivate people to engage because some of those have been disappointed by experiments that were left earlier on and some of those just don't necessarily go with Mozilla because there's so many other things out there that they can do. So where to go from here? Again to reiterate what's following is personal suggestions and ideas that are not endorsed by anybody else, they're all food for thought and not fully planned out. I don't have all the answers either, I have a few ideas and I welcome civil discussions respecting the CPG but I really want us to talk about it and discuss about it. So let's get to those ideas. The first one I'm calling independence, don't put too much in the single word here, what I mean with this is do not depend or rely on Mozilla support or official programs that Mozilla is doing. We should do those things that we're interested in, that we have fun with no matter if Mozilla Foundation or cooperation actually pushes those things. It's about us as the community having fun. That said, it still has to be in support of the manifesto and the open web and not be contradiction to official activities. So if Mozilla pushes for less centralization on social media then it's not a good idea to push for more centralization there, which we wouldn't do but you see what I mean, don't contradict what Mozilla is doing necessarily but let's enrich it and be broader and not depend on Mozilla saying what we have to do. Let's do what we think is good for supporting the open web and the manifesto and have fun with it. 
And when you're enthusiastic and have fun with it, it's much easier to engage other people around you and bring more people into this community. Now for a very concrete topic: let's stay on the ball with WebXR and mixed reality. Mozilla has driven this topic for a while and then pulled back on it a little, but there are, for example, Firefox Reality headsets out there. I recently bought a Vive Focus 3 that has Firefox Reality installed as the default, and there are other headsets out there that have the same. Those typically come with app stores that do not have a lot of apps available, but they have the web available, and with that we can show them the experiences that the web can give us: a multi-device experience from the headset to the desktop to mobile and so on. We can do meetings in Hubs and other metaverse WebXR applications. Let's use Mozilla Hubs, it's there. And even now that XR is beyond the hype cycle and not talked about everywhere all the time, it's establishing itself as a major technology. There are reasons why Facebook has renamed themselves to Meta and is pushing this topic very hard, because this is a technology that is establishing itself in the major cycle of technology, and if we stay on it we can also make sure that the web plays a role in there, and plays a large role in there as an open technology, as opposed to those closed, tightly controlled app stores that we usually see on those devices right now. Let's also not forget about spin-offs, about projects that were at Mozilla and have gone off on their own. Things like WebThings, Coqui, KaiOS and so on are still around and they are still part of the community. Let's recognize that, let's make sure they feel that they are still part of the community and we feel they are part of the community as well. That makes them feel like they are part of a larger thing and makes us feel like we're part of a larger thing; that helps us all. Let's stay connected with them. Let's work with them to push their technology forward, to keep the Mozilla spirit and the free and open source spirit alive there. If we all work together in the community and don't forget that those groups are still doing things that are aligned with us, that makes us all stronger, that helps us all. That said, let's not forget about new technologies that are coming along, let's investigate them. We can be the eyes and ears of this movement and learn about interesting new technologies before even the hype starts. If the official product organization makes a move to any new technology, that sets a sign, so they need to be careful with that. If we, a few people out of the community, look into this, look into that, play with this, play with that, try to connect it with the web, that's much faster. That doesn't set a sign per se, and we already learn about it even before the larger product organization can move on it. That said, for those technologies that are already in the hype, let's look behind the hype at the actual tech. There are a number of people who are overselling everything that is in the hype, who want to profit from it, who want to scam you with something around it. Or, on the other hand, who spread fear about this technology because their field may be disrupted by those technologies. Let's look behind it at the actual technology and not prejudge from a conservative point of view. For more than 20 years of this community, we happen to have been the conservative ones at times. Let's not be afraid of change.
Let's look at what those new technologies can bring into the web to make the web live on, so that it's not superseded by new technologies but included in those new technologies. As FOSDEM is usually in Brussels, I included the Atomium here, as it was a symbol for new technologies back when it was created, at least. That said, when we are talking about new technologies, one of the current problems of the internet are those centralized silos, especially in social media but also in other areas, the Alphabets and Metas that we are at odds with at times. Let's move out of those centralized silos and lobby for interoperability and open standards. The EU, for example, is even working on laws for interoperability. Let's help push that. Let's push for that in other countries as well, or in other areas of the world. Let's also push that outside of politics, just when it comes to technology. Let's push that to those organizations that we are a part of. Let's also push Matrix and other similar systems that are decentralized. I'm using Matrix as a symbol here because Mozilla community communications are on Matrix, but there are also a lot of others, like Jitsi and whatever, you know what I mean: those systems that are not controlled by one of those large companies, or not by one single closed-source company, but those which are open and not centralized in one place. Let's avoid centralized services. Let's demand alternatives. Let's try to bring more usage of those into our circles. When I talk about decentralization, there's one topic that is coming up a lot in this area and that is also close to my heart, and that is blockchains. I'm especially talking about community blockchains and Web3 here. I don't really see it as decentralized if one company is controlling the whole blockchain somewhere, but there are a few out there that are driven by a larger community. Let's also look beyond the hype. Let's openly approach and learn. Let's not be distracted by the scams and overhyping and whatever is going on there. Also not by the fear, uncertainty and doubt that some are spreading around it. Let's look at the actual technology, at the actual people in those communities. There are many honest and open-minded players in that space. I work in that space, I see a lot of that. And especially the Ethereum community shares a lot of values with Mozilla. They may not follow the Mozilla Manifesto one-to-one, because they're not the Mozilla community, but a lot of things that they're doing are very similar to what we have in the manifesto, for example. And the Web3 technologies that are connected with this community: I'm not so happy with this versioning of the web, but this is the phrase that people are using, the term that people are using for this. There are a number of interesting things in there that we have thought about, that they have thought about, and that they have found some kind of solution for, like logging into websites in a decentralized way that is not controlled by Alphabet and Meta and so on. They have a public-key-based login system that is working. It may not be the nicest for normal users, but it's something we can base things on. And this is even without using the blockchain itself; this is just a technology that enables communicating with it, and it doesn't even need to be used with the blockchain. Let's say there is a big ecosystem out there with financial but also non-financial use cases. You usually hear about the financial ones, but there are a lot of others out there as well.
Let's sincerely go there, openly talk to them, and see what's there in terms of technology. As a side note, when it comes to Ethereum especially, the carbon or energy problem that is discussed a lot will be gone soon. For more than a year the base technology for this has been running, and now, in the next few months, the existing chain will be moved onto this, and then this very expensive mining, or proof of work, will be gone from Ethereum. So don't be distracted by this problem when you think about especially the Ethereum community, which is probably, by now, the largest blockchain community out there and, in my opinion, the most open. But let's go to something completely different. Not on the technology side, but really on the side of how we organize our community. Communication has been getting harder because there are a lot of splintered, project-specific channels; as I said, the whole community feels quite a bit splintered nowadays, and communication feels the same way. I think we need some common beacon, some connector that brings us together, that makes us feel like we're in one thing and not in a very fragmented space here. This could be, and this is at least one idea, some kind of weekly or so condensed news package from all over the community. It should not be too long, so that you can consume it in a relatively short time and feel like there's a lot going on around you and feel more connected to this community. This could perhaps even be in podcast form, which I think would be a really good instrument to hear voices and to feel like those voices are all around you, to bring you this connection to all those people in the community and pull the community together. On the other hand, I know a podcast is a lot of work. That said, if people are willing to work on things like that, on this common beacon, on this thing that pulls us together, I'm happy to be involved there as well. Let's connect, let's make something happen, let's make this feel like a more connected space. And then there's the other direction. Let's go out there, let's go to events and talk about it: talk about this community, talk about the manifesto and the open web and everything around it, everything that is connected to what we are doing in this community. Let's bring this out there, no matter if it's online or offline conventions. The online ones are here to stay, I'm sure, but offline ones are starting to happen again, and I hope more and more in the future. And even if there's not an official Tech Speakers program that maybe pays for us to go to an event and talk about the open web and the Mozilla community and the Mozilla Manifesto, there are events we can still go to. Let's do that. Let's tie our activities back to the Mozilla core and let's stand proud at those conferences, at those events, wherever we can go. Let's stand proud with Mozilla and with the whole community, so that we really feel like this is a proud group that we are part of. Let's spread the word and let's make this a really great experience again. With that, I hope that we will feel as connected as this group that is on this picture. But the community is much larger than the group that you see on this picture. I know this was a very enthusiastic group, because I was standing somewhere in the back, probably behind the thank-you sign there. If you want to connect with me, my personal homepage is at home.kairo.at, as I said before, and you will find all the ways to contact me there as well. As I said, also on Matrix.
And as a reminder, our core will always be Mozilla.org; it has been in the past and it will be in the future. That will not change, even if we are much larger than the official organizations behind it. Thank you.
As one of the earliest large FLOSS projects, Mozilla had a strong and growing community of volunteer contributors for a long time. Then, a lot of factors leading up to today led to the environment changing very significantly, and today's community has some good groups in some areas, but not the kind of connected movement that existed in those earlier times. The speaker has been part of all of that development, starting off as a volunteer very early in the project, working on Mozilla staff for a few years in between, and still being part of the volunteer community in recent years. From that point of view, he'll bring up some ideas and suggestions on how this community can become stronger and grow again, so that a significant voice for the Open Web and the Mozilla Manifesto will hopefully be out there also in the future.
10.5446/56937 (DOI)
Hello, FOSDEM. I'm Matthias Loibl and today I want to talk to you about profiling in the cloud-native era. Quickly about me: I'm a senior software engineer at Polar Signals. I maintain various open source projects with others, such as Parca, Thanos, Prometheus, the Prometheus Operator and Pyrra. Pyrra is a pretty cool side project of mine that I want to quickly shout out: I'm working on it with a designer to make service-level objectives more manageable and easily accessible. If you want to reach out on any of the social media, I'm always @metalmatze. So, profiling. What is profiling? Profiling is really old. Profiling was first introduced in the 1970s, and probably even earlier, but that's what we found traces of. It's been used ever since to dynamically analyze programs and measure the resource consumption: the space, or the memory; the time complexity of a program, so the CPU time and the usage of instructions; as well as the frequency and duration of function calls, to know what our program is spending most of its time doing. There are two different ways of profiling, and the first one might be the simplest one, but it also comes with the highest overhead, and that is tracing. Tracing records each and every event constantly in our program, but as I said, the cost is pretty crazy and the amount of data we collect grows way too quickly to do anything meaningful with it over a long period of time. Instead, we also have something called sample profiling, and that means that for a certain duration, for example for 10 seconds, we periodically observe function calls and stack traces. So, for example, let's say 100 times per second we record what we are seeing, and that comes with a pretty low overhead, as you can see right here. So we can do this almost always. What types of profiles can we create? To be more specific, we can create a CPU profile that tells us where our program is spending CPU time. We can create memory or heap profiles: where is our program holding the most memory? And then there are allocation profiles, telling us where functions are allocating the most memory, so slightly different, but super meaningful as well. And then there are IO profiles, and these tell us which functions do many network requests or which functions are writing or reading the most from disk, things like that. So why are we doing this? First of all, to improve performance, and that is to reduce the latency, for example, of our servers. And that could mean that we get our servers from one second of tail latency down to 100 milliseconds, and then our users will be happier and, obviously, since we need to make money, maybe spend more money on our platforms. We can also save money. That means our program does the same task, but if we improve the overall performance of our programs, we can maybe turn off like 20% of our machines. And that's exactly what some companies and organizations have done: they looked at their programs and where they spent most of their time and memory, and then they tried to optimize that, and they could save up to 30% of their resources. To get more specific again, how can we profile Go programs? Go comes with a tool called pprof, and pprof descends from the Google performance tools; it's an open standard. It is described in a protobuf file, and it is an open standard that you can use with Go, but there is also support in other languages.
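As a small aside, the Go runtime's sampling profiler can be driven directly from code via the runtime/pprof package; this minimal sketch writes a roughly 10-second CPU profile (the runtime samples stacks at about 100 Hz by default), matching the sampling idea described above. The busyWork function is just a stand-in workload:

```go
package main

import (
	"log"
	"math/rand"
	"os"
	"runtime/pprof"
	"time"
)

// busyWork keeps the CPU occupied so the sampler has something to observe.
func busyWork(d time.Duration) {
	deadline := time.Now().Add(d)
	for time.Now().Before(deadline) {
		_ = rand.Float64() * rand.Float64()
	}
}

func main() {
	f, err := os.Create("cpu.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Start the sampling CPU profiler; while it runs, the Go runtime
	// records the stacks of all goroutines roughly 100 times per second.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	busyWork(10 * time.Second)
	pprof.StopCPUProfile()

	// The resulting cpu.pprof file can then be inspected with:
	//   go tool pprof -http=:8081 cpu.pprof
}
```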
The most important aspect for Go is that it is built into the Go runtime, and it's part of the tools that are shipped with Go. Again, it is an open standard and there are many languages supporting this. The pprof format itself is not too complicated once you wrap your head around it. On the left-hand side you can see a profile type, and that profile type is kind of a collection of various other types. First of all, there's something called a mapping, and you could also rename that to binary. So if your program is just one binary, there's most likely just one mapping, and then there are certain programs that have many binaries, or a couple of binaries, so this is taken into account. Then every stack trace of your program will create a sample, and these samples point to location IDs; a location is an address of that stack trace, and that sometimes points to a line, and then that line points to a function. So that is how we can represent profiles in memory with pprof. pprof again supports many other languages, but Go is obviously kind of the best supported one. Some are better supported, some not too great yet, but the community kind of works together on improving this. Here we can see how the code on the left-hand side is folded into the stack traces that are then stored in pprof. The functions on the left-hand side are calling each other, and once they are folded, the main function is represented on the very right and the leaf function is on the left, and we can kind of use these as locations in the pprof format. And then the folded stack traces are converted, or transformed, into these samples, and then we get a CPU profile, in this case, for example. Now, to add pprof to a Go program, you can import the net/http/pprof package from the standard library, and once you've done that you can register a couple of HTTP endpoints. In this case you then have a router on port 8080 that you can actually query with the go tool pprof command: point it at the right URL and it will open another web server locally where you can look at different visualizations, such as this icicle graph, or flame graph, and here we can see, for example, that we have some HTTP server that apparently generates random text, and that takes up quite some time. A different way of visualizing this in pprof is a call graph, and here you can see we are looking at a memory profile, and bufio.NewWriterSize, for example, actually has lots of memory allocated on the heap. Profiling is an incredible tool, but it doesn't do everything we want. The problem with just profiling (and we come to continuous profiling in a bit) is that a single profile is just a snapshot of where our program is at that specific point in time, and it is quite manual: we need to go to a machine, download that profile, and then we can do something with it, and it's not automated at all. That's why we have continuous profiling. It was first popularized by the Google-Wide Profiling paper, and now there are a couple of really awesome open source projects out there. So why would we do this? First of all, development is in production: once we have something shipped into production, there can still be bugs and there can still be things that we need to improve.
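To make the net/http/pprof setup just described concrete before moving on: this is a minimal example. The handler on "/" and the port are arbitrary, but the /debug/pprof/ endpoints and the go tool pprof invocation are the standard ones from the Go toolchain:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello profiling\n"))
	})

	// The pprof endpoints are now served alongside the application:
	//   http://localhost:8080/debug/pprof/profile?seconds=10  (CPU)
	//   http://localhost:8080/debug/pprof/heap                (in-use memory)
	//   http://localhost:8080/debug/pprof/allocs              (allocations)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A profile can then be fetched and opened in the local web UI, with the flame graph/icicle and call graph views mentioned above, for example with: go tool pprof -http=:8081 http://localhost:8080/debug/pprof/heap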
The load might be higher, and we might be seeing different artifacts in production, so we still want to be able to profile in production, and the data and context over time is really important. So let's say you have different versions, or just over time there are more users and then fewer users on your platform, right? All of this, taken into account, kind of matters. So when is continuous profiling useful? It is useful to save money: we can look at which functions, and where, the processes are spending their time, and we can try to reduce that. We can understand the differences of our processes over time, and we can even compare across versions, for example a new version: why is that version slower, or where is it spending more memory? And then we can also use it to understand incidents that already happened, because we might be able to kind of time travel and look at our process right before it crashed or the incident happened. A very infamous example is this one, where you can see that we have one gigabyte of memory allocated, and then all of a sudden the program crashes, and it starts creeping up again, and then it crashes again, and that is the infamous OOM kill, right? The out-of-memory kill: the kernel will just terminate the program and it needs to restart. Okay, so how does continuous profiling work? We use pprof, and pprof creates sample profiles, and we want to sample every so often, with pretty low overhead due to the sampling, and we hope to get the profiles right before an OOM kill, for example. And instead of doing that by hand, we do it automatically every few seconds. Once we are scraping or ingesting these profiles, we want to index them by their metadata, so we can search for certain containers, for example, and then once we index and store these profiles we also want to be able to query them in a meaningful way; that unlocks new workflows that were impossible before. To give a bit more concrete an example: here we are trying to do continuous profiling for heap profiles, for allocation profiles and for CPU profiles, and every 10 seconds, with a bit of lag, the continuous profiling project would reach out to our pprof endpoint and collect these different profile types. And what's really cool is that it is possible to profile in production all the time. The overhead is there, but it's kind of negligible, and you can do it in production, which is pretty good for really getting down to root causes and optimizations. One special feature that we wanted to shout out is profile-guided optimizations. So imagine that over time you observe your program and you merge together all these profiles (we get to merging profiles and what that means in a bit), so we look at what our program is doing over hours of time, and then we can tell that to the Go compiler, and the Go compiler can take that information and really optimize our program when compiling already. So that is really cool and we are looking forward to having these capabilities in Go, and I'm pretty sure these exist in Rust already, so we can hopefully manage to integrate there as well. That would be crazy good. Okay, so now I want to talk to you about Parca. Parca is our open source project for continuous profiling.
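Parca handles this with proper service discovery, labels and a storage engine, but purely to illustrate the "fetch a profile from a pprof endpoint every few seconds and keep it with some metadata" loop described above, here is a deliberately naive sketch. It just writes each scraped profile to a timestamped file; this is not how Parca is implemented:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

// scrape asks the target's pprof endpoint for a short CPU profile and keeps
// it on disk, labelled by target and scrape time. A real system would index
// these by labels in a database instead of dropping files in a directory.
func scrape(target string, seconds int) error {
	url := fmt.Sprintf("http://%s/debug/pprof/profile?seconds=%d", target, seconds)
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	name := fmt.Sprintf("%s-%d.pprof", target, time.Now().Unix())
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	target := "localhost:8080"
	for range time.Tick(10 * time.Second) {
		if err := scrape(target, 10); err != nil {
			log.Printf("scrape of %s failed: %v", target, err)
		}
	}
}
```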
Here you can see a quick overview of the Parca project. The Parca server is responsible, similar to what Prometheus does, for either scraping or ingesting profiles and then storing them in a time series database, indexing them, and providing a query engine, and it also has a gRPC-based web interface where you can visualize these profiles. For scraping these pprof endpoints we need to somehow discover these processes, and Parca supports either a Kubernetes service discovery, which is actually the Prometheus one, so if that works for you with Prometheus it should work for you with Parca, or a static or file service discovery, where you can basically write the endpoints to scrape into a static configuration. And a really cool project that Parca also has is the Parca Agent. The Parca Agent uses eBPF to create profiles, and these profiles are then sent via gRPC to Parca, and that's where you can visualize them. Parca is an open source project hosted on GitHub, has a neutral governance, and contributions are welcome. It is inspired by Prometheus. It is a single statically linked binary. It uses the same multi-dimensional label model that Prometheus has. As I already mentioned, it uses the same service discovery as Prometheus, and it has a built-in storage, which currently is in-memory only, but we want to add persistent storage eventually. And because we have the Parca Agent, it is super easy to integrate: a single profiler using eBPF automatically discovers targets from Kubernetes or systemd across the entire infrastructure with very low overhead. It supports C, C++, Rust, Go and more, and we are constantly trying to improve the support for these languages and add more. Just like Parca, the Parca Agent is open source. It discovers cgroups version 1 and 2 on the current system, and it uses eBPF to create these profiles. It understands where CPU, memory and IO resources are being spent; right now we have CPU profiles. It captures the current stack trace a certain number of times per second and creates a profile from that. The really cool thing is you don't need to change any of your code to create these profiles. Here is a high-level overview of how the Parca Agent creates these profiles: first of all, it discovers the cgroup from the target provider. It then loads and attaches a BPF program to your cgroup. It waits 10 seconds and then reads the BPF map from the kernel. It transforms these maps into the pprof format and then symbolizes the debug symbols on the fly. All of that is then sent to the Parca server, and the process is repeated every so often. Let's talk about visualizations. Parca's web-based UI looks like this: you can select a profile type that you want to look at on the drop-down, and once you've done that, in this case a memory in-use bytes profile, you can click on a specific point in time, select that, and it will give you the profile as an icicle graph. The same is true for the CPU profile, and in this specific example we had various profiles merged into a single profile. So the underlying data here is actually many profiles, but you get back one specific profile where everything is combined and summed up for you. So you get, like, one hour's worth of profiles in one profile.
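Coming back for a moment to the step in the agent pipeline that transforms the aggregated BPF map into the pprof format: purely as an illustration, and not the Parca Agent's actual code, this is roughly what building such a profile could look like in Go using the github.com/google/pprof/profile package. A real agent would also fill in addresses, mappings and symbolization, which is skipped here:

```go
package main

import (
	"log"
	"os"
	"strings"

	"github.com/google/pprof/profile"
)

func main() {
	// Aggregated stacks as they might come out of an agent's BPF map:
	// folded stack (leaf function first) -> number of samples observed.
	folded := map[string]int64{
		"computeHash;handleRequest;main":   1200,
		"writeResponse;handleRequest;main": 300,
	}

	p := &profile.Profile{
		SampleType: []*profile.ValueType{{Type: "samples", Unit: "count"}},
	}
	locByName := map[string]*profile.Location{}
	nextID := uint64(1)

	// locationFor deduplicates functions/locations by name, mirroring the
	// samples -> locations -> lines -> functions structure described earlier.
	locationFor := func(name string) *profile.Location {
		if loc, ok := locByName[name]; ok {
			return loc
		}
		fn := &profile.Function{ID: nextID, Name: name}
		loc := &profile.Location{ID: nextID, Line: []profile.Line{{Function: fn}}}
		nextID++
		p.Function = append(p.Function, fn)
		p.Location = append(p.Location, loc)
		locByName[name] = loc
		return loc
	}

	for stack, count := range folded {
		var locs []*profile.Location
		for _, name := range strings.Split(stack, ";") {
			locs = append(locs, locationFor(name))
		}
		p.Sample = append(p.Sample, &profile.Sample{Location: locs, Value: []int64{count}})
	}

	f, err := os.Create("agent.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := p.Write(f); err != nil { // writes the gzipped protobuf wire format
		log.Fatal(err)
	}
}
```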
And then the other really interesting profile type is a diff profile. That works by clicking on two different points in time for the same binary, for the same process, and then you can see in green where less memory is in use and in red where more memory is in use for each of these stack traces, which tells you where the memory is actually allocated. Let's look at Parca in action. This instance is running on Minikube and all the data is coming from the Parca Agent with eBPF. First let's select our profile type, and we want to query the last hour. Now we can see all the time series that Parca has ingested for these profiles. We can hover over them and drill down into different labels, so for example let's go into namespace equals kube-system. Up here we can see the first series, which is actually the Kubernetes API server, and if we click on this profile we can scroll down and see the Kubernetes API server's CPU profile, without us ever having touched the API server. The same is true if we click, for example, on this etcd-parca-demo: now we're seeing a profile of the etcd running in the Minikube cluster. This is Go, but it was instrumented with eBPF. Let's take a look at the parca namespace: next to Parca and the Parca Agent I've deployed several other languages. Rust sadly isn't too functional yet, but hopefully by the time you are seeing this we might have had the chance to improve that. Instead, let's look at Node.js. We can use a regex matcher to look at all the Node.js applications deployed to this cluster, and if we click on this one, for example, we get a Node.js profile. Awesome, the profile has loaded. Scrolling down, next to the Node.js binary we can also see the just-in-time compiled part of our Node.js application, and in there we can see the main JavaScript file calling a Fibonacci function, which keeps calling itself; as you can guess, it is a recursive implementation of Fibonacci, and we can see how often each of these functions was called. That's really cool, because just by adding one flag to the way we run our Node.js application we get all of this profiling data. Let's look at another example, because we have a Java application deployed to this cluster as well. Instead of looking at one profile on its own, let's merge all of the profiles we got in the last hour. This is really cool because now we can see our Java application too; we haven't really done much to it except pass a few flags when starting the application, and we can drill down into the Java application and see what it is up to. As the last part of the demo I want to show you how you can use the pull-based mechanism to instrument a Go binary on its own. For that reason I have started the Parca binary outside of Minikube and it is instrumenting itself. If we go to select a profile, we can now see that we have a couple more profile types available. First let's take a look at the memory in-use bytes: looking at this we can see where our heap memory is allocated and currently used, and right now the in-memory database we are using has allocated the most memory on the heap. Next to that we can look at the memory allocated bytes total, which helps us see which functions are allocating most of the memory.
In this case we can see that the flat profile handling from pprof, this map of samples, is allocating a lot of memory, and that is completely fine, because that is how we actually ingest the profiles into Parca. The last profile type I want to look at is the goroutines created total: we can look at the goroutines used by the Go program, and as you can see we are creating a bunch of goroutines for the HTTP server in this case. We can also merge these together over the last hour, or in this case since Parca was started, and we can see, for example, that the run group function has spawned a bunch of goroutines, which is totally fine. The last thing I want to look at with you is comparing these profiles: we can take this point in time and compare it to this point in time, and we can see down here in red where the most goroutines have been created, and in comparison you can see that our HTTP server created a bunch of goroutines. Now that we have seen Parca in action, let's talk about the storage. Parca actually started as another project called Conprof, and Conprof was a direct fork of Prometheus that we modified to be able to store profiles, but that came at a certain cost and we needed something better. So instead we took inspiration from the Prometheus time series database but have rewritten it since, so it is really optimized for the continuous profiling use case. Before, we basically stored a profile as a gzipped blob in the Prometheus TSDB; now we disassemble everything and store it in a much more optimized way. That's what we've done with Parca, and that's also why it was rebranded as Parca, among a few other things. Parca's storage is still inspired by Prometheus. It separates out the metadata when ingesting into the storage, which is a key difference from before, and it also handles stack traces natively in the storage. We do that by using different chunk encodings: XOR encoding is used for the values, just like Prometheus; delta-of-delta encoding is used for the timestamps, just like Prometheus; but we also use run-length encoding for values that basically always stay the same (if you always profile for 10 seconds, we can store the 10 seconds once and just increase a counter every time we see that value again), and we use value sparseness: if we don't see a stack trace, we don't store anything for it and simply pretend that it is zero. The Go pprof profile struct actually looks like this; we saw a visualization of it earlier with the different data types, but this is really what it looks like in Go. We have the samples over here, and then the locations, functions, and mappings, all as slices, or what other languages would call arrays. When ingesting these profiles in Go, we need to create the metastore entries and essentially strip out the metadata. As you can see, a location has an ID and an address, and it has one or multiple lines, each line can have a function, and the function names are often repeated. So we can walk all of this, store it into a key-value store (previously it was a SQLite database), and from there we just use the values you can see up here and treat them basically as time series, just like in Prometheus.
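As an illustration of that sample-to-location-to-function structure, here is a small Go sketch using the github.com/google/pprof/profile package; this is not Parca's ingestion code, just a walk over the same metadata a server would deduplicate (the input file name is a placeholder).

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/pprof/profile"
)

func main() {
	// Parse a gzip-compressed pprof file, e.g. one fetched from /debug/pprof/heap.
	f, err := os.Open("cpu.pb.gz")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	p, err := profile.Parse(f)
	if err != nil {
		panic(err)
	}

	// Walk sample -> locations -> lines -> functions: the repeated metadata
	// that gets stored once, separately from the per-sample values.
	for _, s := range p.Sample {
		for _, loc := range s.Location {
			for _, line := range loc.Line {
				fmt.Printf("value=%v addr=0x%x func=%s file=%s:%d\n",
					s.Value, loc.Address, line.Function.Name, line.Function.Filename, line.Line)
			}
		}
	}
}
```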
Once we have extracted the metadata from the profile, we are left with the stack trace IDs as UUIDs and the location slice, as well as the values, and that is what we work with from there on. To give you a quick overview of the architecture: the profile is ingested up here, it is then parsed and validated with the Go pprof library, we convert the profile and uniquely store these mappings, locations, and functions, as said earlier, and what is left are essentially time series values that we can ingest into this highly customized time series database for profiles. Even after we have extracted the metadata such as the function names, we are left with some other metadata such as the timestamp, duration, and period. These are all numbers, essentially, and we can apply delta-of-delta chunk encoding and run-length encoding to these various types: because timestamps are always increasing by roughly the same amount, delta-of-delta encoding works really well for them, and because the durations and periods stay the same most of the time, run-length encoding is what we use there. A quick note: we only store the timestamp in here, and when retrieving a profile we look up the timestamp and then use the offset in the time series to fetch all the other data. So how does it look when querying a profile from the storage? In this case we want to see the heap profile of Parca, of the instance localhost on port 7070, and we want to be very specific about the time, so we give it the Unix timestamp. This query is then turned into a gRPC request that looks like this, and it is defined in a protobuf. At last, I want to quickly explain how comparing, or diffing, profiles works. You can see two profiles over time with two samples in each profile. The first sample has a value of 253 and then a value of 257; if we subtract those we get back a value of minus 4 for this first sample. The other sample has a value of 26 at first and then a value of 24; subtracting those gives us 2. That is how we can compare different profiles and produce those red and green visualizations. The other synthetic profile is a merge profile, and these profiles are created by summing up the values of all the matching samples. In this case the first sample has these two values, and if we sum them up we get 510; the other one has 26 and 24, which gives a sample value of 50. That is how we can take many profiles across a long time range and sum them all up, so you get the whole view of all the profiles over an hour or even a day, for example. All right, now you might be asking: what is on Parca's roadmap? We want to add persistence; right now we are fully in memory, and that is because we are also experimenting with different storage formats, such as a columnar store, and experimenting with Apache Arrow, and we want to nail all of that before ever writing any of the profiles to disk. We want to be able to query only parts of stack traces, for example. We want to improve the language and runtime support, especially in the Parca Agent via eBPF, and we want to add additional profile types such as heap and allocations and I/O, like networking and disk utilization.
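To make the merge arithmetic described above concrete, here is a sketch using the Merge helper from github.com/google/pprof/profile. Parca's storage does this natively, but the semantics are the same: matching samples have their values summed. The file names are placeholders.

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/pprof/profile"
)

// loadProfile reads one gzip-compressed pprof file.
func loadProfile(path string) (*profile.Profile, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return profile.Parse(f)
}

func main() {
	// Two profiles of the same process taken at different points in time.
	a, err := loadProfile("heap-t1.pb.gz")
	if err != nil {
		panic(err)
	}
	b, err := loadProfile("heap-t2.pb.gz")
	if err != nil {
		panic(err)
	}

	// Merge sums the values of matching samples, giving one combined profile,
	// e.g. "an hour's worth of profiles as one profile".
	merged, err := profile.Merge([]*profile.Profile{a, b})
	if err != nil {
		panic(err)
	}
	fmt.Println("merged profile has", len(merged.Sample), "samples")
}
```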
If you are interested in that roadmap, and especially in the columnar store we are experimenting with right now, we invite you to jump on our Parca Discord or attend the Parca office hours, where we are happy to discuss pull requests, ideas, feature requests, or anything really, and where we also give updates to the community. Hopefully, if any of this sounds interesting to you, you can attend and we'll see you there. Thank you for listening; again, I'm Matthias Loibl, feel free to reach out, and I'm also here for the Q&A now. Thank you and bye bye. So thanks a lot for the awesome talk, it was really interesting to watch and to listen to. Welcome to the live Q&A session. We have a couple of questions in the chat; a few of them have already been discussed, but I guess it makes sense to repeat them here so that people watching the video later will also see what went on. The first one: if an application that's being profiled is running somewhere Parca cannot scrape it, is there any way to push data to the Parca server? Yes, there is, and first of all thank you for having me, it's a pleasure being at FOSDEM. I would love to be in Belgium with all of you; it was always a great experience, hopefully again soon, but this is turning out really well too. So, to the question: yes, if you can't be scraped, there is a project on GitHub, a repository called profile, which is essentially a way of profiling with pprof where you can turn profiling on and off in Go, and the resulting profile can then be sent off to a remote endpoint, with gRPC for example. That was something I actually hacked on to make the serverless use case work: imagine a request coming in, you profile the request, and once that request is done you send the profile off to a remote service to store it there, if the number of requests you get isn't too high. And if it is a serverless function, you actually gain a lot of value from profiling and improving that function, because you can reduce the amount of CPU and memory, and in the end the time spent running the function. So that's one of the ideas we were playing with; we need to pick it up again, but there certainly are ways to do this. Nice, thank you. Then we have one more from Brian, noting that we saw a presentation from Pyroscope earlier in this devroom, and asking if you could elaborate a bit on how the projects compare. Right, yeah, it was a great presentation, I watched it too.
I think overall, as mentioned in that talk as well, it's doing roughly the same things, slightly differently. We really try to make sure that we're building on top of pprof on our side, as we think it is already an open standard, an open format for representing profiles, and everything we do is based around pprof: we create pprof profiles with eBPF, for example, we can scrape pprof profiles from Go endpoints, and there are clients in other languages that create pprof as well. As I said, we are really focusing on generating pprof profiles with eBPF, and I think that's the biggest differentiation. The storage is different as well, and there are different trade-offs to be made, as mentioned quickly in the talk. We're also looking at moving from a Prometheus-style time series database to a columnar store; we actually have an RFC, a request-for-comments document, out in the open which people can read through (I'm happy to link it later as well), where we discuss the different trade-offs between a time series database and a columnar store. So there are many different aspects, but the same overall ideas, and I think it's really great to see the ecosystem moving forward with this and expanding on the three pillars of observability. Cool, thanks a lot. Maybe I can sneak in a question myself, a bit of a beginner's question, about the visual representation as icicle graphs. What I'm always wondering is whether these are actual stack traces or some kind of merge. If you have, for example, a recursive algorithm, then when you take snapshots you may get different stacks each time, because you have different depths of recursion and so forth, but overall it keeps repeating the same functions. Is anything aggregated, or would you just see a sequence of different stacks next to each other? So yes, it's definitely aggregated, and that comes from the nature of it being sampled profiling. With something like tracing profiling, where you actually record every single stack trace that ever happens, you would record all of that, but the overhead of doing so is super high. With sampled profiling you just sample, say, a hundred times per second; computers are so fast that you can just do it a hundred times per second, and that is what makes up the actual profile in the end. So yes, they are somewhat aggregated already, but if you sample a hundred times per second, at least the stack traces that show up the most will definitely be in there, and those are the ones we are interested in. I don't know if that answers your entire question. No, yeah, so basically the aggregation is through the sampling itself. Yeah, exactly. And I think that's true, as I said, for most implementations: you could do trace profiling, but that's super expensive, so you end up with a sampled profile. Yeah, cool. Do you have any more questions?
Yeah, thank you for the presentation. I really like the merge view, where you get more of an average understanding of how the system behaves. Any thoughts about using that data to identify outliers? In my case I run a lot of Hoverfront servers, many, many of them, and I think it would be interesting to see whether some server is misbehaving. If I know the area I'm curious about, I could instrument it using metrics, but if I don't know exactly what's wrong, it's much trickier. Yeah, that's a really good question, and a good idea about how to use profiling, and certainly that's one of the reasons we went with the Prometheus service discovery and labeling. Our goal in the end is that you run an eBPF agent on every node, they send these profiles off to Parca, and then by merging everything you can see the outliers, or the biggest resource hogs, that...
Continuous profiling is a widely used practice at Google but has only recently started gaining popularity in the Observability space, however, resources on this topic are still rare compared to other observability signals especially on open source projects. This talk intends to educate the wider community about the possibilities of continuous profiling, and give a glimpse into open-source tooling allowing everyone to join in on the practice and enabling everyone to build better software. For years Google has consistently been able to cut down multiple percentage points in their fleet-wide resource usage every quarter, using techniques described in their “Google-Wide Profiling” paper. Ad-hoc profiling has long been part of the developer’s toolbox to analyze CPU and memory usage of a running process, however, through continuous profiling, the systematic collection of profiles, entirely new workflows suddenly become possible. Matthias will start this talk with an introduction to profiling with Go and demonstrate via Parca - an open-source continuous profiling project - how continuous profiling allows for an unprecedented fleet-wide understanding of code at production runtime. Attendees will learn how to continuously profile code to help guide building robust, reliable, and performant software and reduce cloud spend systematically in various languages.
10.5446/56939 (DOI)
Good day, good morning, good afternoon, wherever you are. My name is Bram Vogelaar, and in the next 30 minutes or so we'll be discussing how to bootstrap an observability stack in multiple DCs. First let me introduce myself. I went to university and became a molecular biologist; along the way, as my colleagues used to say, I slowly slid over to the dark side and became a bioinformatician. In the academic world, if you're the programming guy you're also the server guy, so I picked up ops skills along the way. For the last decade or so I've been in the DevOps and SRE space, and I'm now employed as a cloud engineer at a Dutch consultancy called The Factory, where we guide people into the cloud. So let's first introduce the subject of today, the observability stack. What is it? Observability is a new-found way to describe what we used to call monitoring, and it has three main components. First, metrics: can we ask the system, or can the system tell us, how much it is doing of something, for example how many requests it is processing. Second, logs, which we ask what is actually going on. And eventually we get to tracing, where we can look at how much time the system spends in which component: in a fairly complicated architecture like microservices you don't always know where in your platform's ecosystem you are spending time, so you want tracing in place to be able to look at what's going on. So what makes observability more than monitoring? Basically, you take the luck out of your system: we not only obtain these metrics, logs, and traces, but we try to use them in such a way that the system becomes predictable, or at least that we have a lot of confidence that we can observe what is going on and therefore act on the individual components that may or may not need attention or care. If we start zooming in on the individual components of our observability stack, I'd like to start with metrics, because historically that is where it started. Prometheus is the one I like, and the one we'll be zooming in on today. It was originally created at SoundCloud and quickly became the new normal, the golden standard, for collecting metrics. It changed the paradigm we used with things like Graphite, where you pushed metrics to Graphite and Graphite had to somehow cope with them; Prometheus works the other way around: you tell Prometheus which endpoints it needs to scrape, it does so, and it stores the results in its local time series database. What does that look like as a configuration example? Here we say we want to scrape all our endpoints every 10 seconds, we have a job unimaginatively called node, and we have a static list of endpoints we want to scrape; in this case we'll be looking at all the components of our observability stack: our Prometheus, our Loki, our Tempo, and so on. Now, how do I use the metrics that Prometheus collects? For that they created a very nice and powerful query language called PromQL, and there are a couple of examples here on the screen. It uses the notion of labels to let the user look into the data in a fine-grained way: in this case, give me all the HTTP requests for my scrape job called nginx, looking for a handler that points at Grafana.
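As a rough illustration of such a setup, a static scrape configuration might look like this; hostnames, ports, and the metric and label names in the comment are made up for the example:

```yaml
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - prometheus.dc1.example:9090
          - loki.dc1.example:3100
          - tempo.dc1.example:3200
          - grafana.dc1.example:3000

# A PromQL query in the spirit of the one described above (names illustrative):
#   rate(nginx_http_requests_total{job="nginx", handler="/grafana"}[5m])
```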
The query language takes a bit of getting used to, so the link at the bottom points you to how to get started. The same goes for Loki. Loki came later and is advertised as Prometheus for logs. It came out of Grafana Labs and actually shares a lot of code with Prometheus and its alternative long-term storage provider, Cortex. What does that configuration look like? Loki is a push system, so we need two components. We need Loki installed, with the basic configuration on the left: we expose it on a port, we need a place to locally store the data, and this example job runs with replication factor one (if that goes wrong, we'll get back to it later). We also already connect it to something called the Alertmanager, which we will review later. That brings up Loki on the one end; on the other end, on every machine we want to send logs from, we need something called Promtail. Other tools are available, but Promtail is made by the same people who create Loki and is released at the same time. Promtail needs a client to send to, in this case our local Loki in our data center, and a scrape config where we define which local log files we want to scrape and which labels we want to apply. Why labels? Loki also has its own query language, LogQL, which feels very familiar if you know PromQL, and it can search the log files based on the labels we apply. In the first example we're looking for the job called mysql and anything that is an error; in the second example we're looking for an error that is not a timeout. If you want to know more about LogQL, the link is at the bottom. That brings us to the third tool, tracing. For that, the lovely people at Grafana created Tempo, and it builds on the granddaddies of the tracing space, Jaeger and Zipkin. Other tools are available, but those granddaddies are quite complicated to set up. What Tempo did differently is that it is written in Go, so there is a single binary we can install, and we don't need difficult backends like Cassandra that the other tools rely on. Tempo exposes the Jaeger and Zipkin APIs for us to use, so we can reuse the big ecosystem of SDKs that already speak those protocols, which is a very clever move. Tracing is interesting in that you build it into your application; it's not something you run next to your application, it's something you really need to build in. That brings us to the fourth component: we have our metrics, our logs, and our tracing in place, and we need some way of observing what is going on. For that I want to introduce Grafana. Grafana is actually the oldest of the four tools I've introduced so far: it started its life as a front-end for Graphite, which I introduced before, slowly gained more capabilities, and is now fully plugin-based, so you can use it with pretty much any back-end you can think of. And of course it's all about dashboards, that's what Grafana does: we can visualize the data in graphs and gauges, we can correlate things, and it's a very powerful tool you can click together. You can also stand on the shoulders of giants: the lovely people at Grafana provide a registry of dashboards where people upload their own examples, and you can very easily reuse those.
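Going back to the Loki and Promtail part for a moment, a minimal Promtail configuration along those lines might look roughly like this; URLs, paths, and label values are illustrative, and the two LogQL queries just described are shown as comments:

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki.dc1.example:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: mysql
          host: db01
          __path__: /var/log/mysql/*.log

# Example LogQL over these labels:
#   {job="mysql"} |= "error"
#   {job="mysql"} |= "error" != "timeout"
```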
Back to Grafana: for those people who like to do everything as code, they also released a tool called Grizzly, with which we can very quickly download and upload the JSON representation of these dashboards from and to Grafana instances. That way we are not stuck with something artisanally clicked together that we have no way of reproducing and no record of when we changed it; that's not what we want, and it's hard to put back. For that we can use Grizzly. Grafana, as said, also uses data sources, and we can put those in as code too. Here is an example where I configure Loki: I just point it at the URL. As an example we'll do it the YAML way; there is no basic authentication here. The thing I want to point out is that you can also connect data sources: in this case we connect our Loki data source to our Tempo data source. We do this by adding a little stanza at the bottom called derived fields, where we link any trace ID (the words "trace ID" in whatever way they are spelled) to the trace ID in Tempo. That's what that looks like. In Grafana we can then do things side by side. Here is an example where on the left we have Loki; it's perhaps a bit too small, but we can see there is a trace ID with the actual value and a little button that says Tempo. What that button actually does is open the screen on the right, where we find the actual trace in Tempo. So we can link the log message on the left with the traces on the right and do side-by-side exploratory research. The last component is, of course, the Alertmanager. I say "of course" because the Alertmanager is something that came out of the Prometheus project. We want alerts because that's how we get told about something going wrong, ideally before it actually goes wrong, so that we have enough time to fix whatever is about to break. To me it's an important component of the observability stack: observing is fine, but we need to be able to act. What do alerts look like in this particular stack? Alerts are also just queries. In this case we're looking for any scrape job that has recently failed, and if we find one, we send an alert. The beauty of the Alertmanager is that it does the actual sending of notifications: we can send them to Slack, or to any of the PagerDuty-type providers. And again we can stand on the shoulders of giants and reuse the lovely work at Awesome Prometheus Alerts, an open GitHub repository with a large collection of pre-existing alerts that we can simply reuse. Now we have a full observability stack, but why would we have one in multiple DCs? That brings me to the beginning of the story (the beginning is somewhere in the middle, which is interesting). The beginning of the story is last year: the lovely people at OVHcloud had a big disaster. One of their data centers burned down, and not just one data center, the entire campus burned down. Luckily, they had more than one campus. The day after, my ISO guy came to me and asked: how are we protected against losing an entire campus? That made me think. For applications, most architects these days will think about multi-DC or even multi-cloud and make sure we have an architecture that survives failure, but my observability stack is most of the time just in one DC. I spent some time working out what would be required to build an observability stack that is as robust as the applications we deploy. In the second half of this presentation, I want to go into how we did that.
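Before moving on, the derived-fields stanza just mentioned, expressed as a provisioned data source, might look roughly like this; the URLs, the uid, and the regex are illustrative and depend on how your applications actually log trace IDs:

```yaml
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    uid: tempo
    access: proxy
    url: http://tempo.dc1.example:3200

  - name: Loki
    type: loki
    access: proxy
    url: http://loki.dc1.example:3100
    jsonData:
      derivedFields:
        # Any "traceID=..." found in a log line becomes a button that opens
        # the matching trace in the Tempo data source above.
        - name: TraceID
          matcherRegex: 'traceID=(\w+)'
          datasourceUid: tempo
```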
So, back to the multi-DC question: there are a couple of ways we could have gone. We have one stack; how do we make it multiple? The reason I don't want one stretched stack, but multiple, is that our failure domain is the campus. When we lose a campus, we shouldn't be affected in any other campus, DC, or region, or whatever we want to call it: we should fail over to that other campus and happily continue running there. We don't want something stretched where half of our platform goes away and we hope we have enough capacity to keep running. We want two independent DCs, just the way we build our application platforms. Let me introduce the last component I need for this: HashiCorp's Consul, an open source service discovery tool where services can announce themselves. It also has a built-in key-value store; one of its limitations is that the key-value store is only available in your own DC, with no real replication between the two data centers. In the meantime it has also gained service mesh capabilities, and we can apply ACLs to the services we announce into the service discovery tool. In this case, I am creating a local service for our Grafana instance, which listens at localhost on port 3000, which is where we health-check it; it also listens on the public IP. We call it grafana and we apply the tag metrics. Once we have posted this to our local Consul cluster (we post it to our own node and it gets replicated using the Raft protocol), we can actually use the Consul server as a DNS server, which is awesome: DNS is a solved problem, something we already have. We can hook Consul into our local BIND or Unbound, and then we can query grafana.service.consul like any other DNS name, except that it is dynamic. The downside, again, is that Consul is very data-center aware: that name stays within your local data center. What they came up with is Consul prepared queries. We add another stanza for the service grafana that says: in case my local DC fails, fail over to our second data center, in this case very unimaginatively called dc2. We post it to our local Consul, it gets replicated, and now we can use the following DNS query to reach our Grafana service: no longer grafana.service.consul but grafana.query.consul. That allows us to have multi-DC failover. Now that we have Consul in place, we set it up for all the services we discussed before: our Grafana, our metrics, our logs, our traces, and our Alertmanager. Let's go down our observability stack and make sure each piece can survive this campus or DC failure. Grafana itself is actually fairly simple to make HA; well, the NGINX in front of Grafana is simple to make HA. We can use a VIP address, or if you're in the cloud you can use a geographical DNS setup so you always end up in the nearby data center. If you run your own infrastructure, a simple VIP using keepalived is fairly easy to set up. In this case we have a server name that listens on two IP addresses, and we proxy_pass to the Grafana prepared query. Grafana needs a couple more things, though. Grafana uses a database to store its information, its configuration. By default it uses SQLite, which is an on-disk, file-based database, so that doesn't scale.
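For illustration, a prepared query with failover along those lines can be registered by PUTting a body like the following to Consul's /v1/query endpoint; the service itself is registered separately (name grafana, port 3000, tag metrics), and the names here are illustrative:

```json
{
  "Name": "grafana",
  "Service": {
    "Service": "grafana",
    "OnlyPassing": true,
    "Failover": {
      "Datacenters": ["dc2"]
    }
  }
}
```

With that in place, grafana.query.consul resolves to healthy local instances first and falls back to dc2 when none are passing.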
Back to Grafana's database: we need to get that information into the other data center as well. The two other options are Postgres and MySQL. Postgres does not have master-master replication in its open source variant, so I'm going with MySQL, where master-master replication has been around for a very long time and is fairly simple to set up: basically we have two masters that use each other as a slave, so they keep each other in sync. If we lose one, the other keeps running; when we bring the first one back up, it just has to sync again from the healthy master that is still running. The data sources need to be adapted as well: we no longer point them directly at our local Loki, but at the prepared query, so if the local one goes away it automatically fails over to the other side and we start querying the data that is available there. So now we have Grafana in place: an NGINX that will fail over, and a Grafana that fails over in the multiple ways things could go wrong. Next up is Prometheus, and Prometheus is actually quite cool here: it has built-in Consul service discovery. Instead of maintaining a long list of static targets that we have to keep adding and removing, we can simply let Consul discover them automatically. My very long list of static configs on the left now becomes fairly simple: we still scrape every 10 seconds, but now we just look at the local Consul server in our local data center, and any service announced with the tag metrics, we scrape. Eventually we do the same for data center two, and in data center two we do the reverse. Making Loki HA is actually quite simple: we build one in either data center and adjust our Promtail configuration so it no longer sends to one Loki endpoint but to two. Then we have the data in both data centers, and if the Grafana data source endpoint fails over, we have it all on the other side. Then Tempo. That was actually the hardest one for me: how do we make that HA? Do we send everything twice from the SDK? It turns out there are exactly zero SDKs that have this capability built in. Do we write it twice? Also not an option, because Tempo reuses code from Cortex and Loki; it can use Consul, but it uses the key-value store, and as we discussed, that is only available in your local data center. Can we read it twice, then? Also not an option, because it uses the same code. I was starting to think we should add a proxy component, and I actually had a half-working example ready, and then Grafana Agent 0.14 came out and introduced a remote write option for traces, which really saved the day for me, because that was exactly what I was looking for. So I simply introduce a new component: instead of writing directly into Tempo, we now write into the Grafana Agent. We still have the same receivers, but the agent doesn't store anything locally; it basically proxies straight through, and we add two Tempos as remote write targets. Please be aware that by the time you watch this a newer version is probably out, and the port may have changed. Then we need to change the Tempo configuration, which now becomes really simple (before it was quite long): we simply have the OpenTelemetry ports open. Please also adjust your firewall.
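A Prometheus scrape configuration using Consul service discovery along those lines might look roughly like this; server addresses and datacenter names are illustrative:

```yaml
scrape_configs:
  - job_name: consul-services
    scrape_interval: 10s
    consul_sd_configs:
      # Local Consul agent: discovers everything announced in this DC.
      - server: 'localhost:8500'
        tags: ['metrics']
      # Same agent, but ask for the other datacenter's catalog as well.
      - server: 'localhost:8500'
        datacenter: 'dc2'
        tags: ['metrics']
```

Only services carrying the metrics tag are discovered, which matches the tag we attached when registering them in Consul.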
That firewall bit cost me a couple of hours of my life I will never get back. So now Tempo just listens on the gRPC port, which the Grafana Agent sends to directly on both sides. The last component we still need to fix is the Alertmanager, which is actually also fairly simple to fix. In the systemd defaults, or however you run Alertmanager, we need to add two more parameters that turn a single Alertmanager into a cluster: basically you advertise it on a new port where all the interaction between the members of your Alertmanager cluster happens. On the other side, we configure Prometheus and our Lokis to simply write to two Alertmanagers, and then the Alertmanager cluster sorts out things like deduplication among its members, which is actually quite cool. And that, hopefully, brings us to the end: I hope I've shown that we can have an observability stack in multiple DCs that survives the failure of an entire DC or campus. The picture below isn't complete, but it shows the components we added to make it highly available. That brings us to the end of the talk. If you want to discuss more with me, you can email me; if you want to rant online, you can tag me on Twitter; and by the time this presentation goes out, the slides should also be uploaded to SlideShare. If you want to play with this more, there is an example setup that you can play with yourself, try to destroy, and see if it survives. So thank you, I'll be around in the chat channels for a while. Thank you for listening, I hope you found it interesting. I think we're about to go live. Do you want to take the questions from the chat yourself, or shall I read them for you? Up to you how we do this. There aren't too many questions, so I think that might be easier. Okay, question number one is: what is my experience and/or my thoughts on Telegraf instead of Prometheus exporters? I can be very brief: I have no experience using Telegraf with Prometheus. Telegraf normally ends up feeding InfluxDB, and I don't like the fact that I need to pay for the enterprise setup, a.k.a. multi-DC. I think that was the only question so far. I think something worth mentioning on that topic is that if there's no MySQL exporter, or no Prometheus exporter for something, using Telegraf instead and just exposing Prometheus-format metrics from it works well enough, because I don't think there's a complete overlap in what each can expose. That's something to look into for me, then. I think I did something with SQL Server, for example, a long time ago, which didn't have a Prometheus exporter. So let's see, there's someone typing at least, so maybe we can hope for a question. No. Well, in the meantime we can perhaps finish the discussion we had in the closed section. Yeah, you were saying: one thing that's tricky with this setup is the master-master replication for the Grafana servers. If you have a small set of data, master-master across data centers works fine, and Grafana can be one of those; it depends on how much data you have in there. Although we do have some experience running master-master where migrations grind the MySQL servers to a halt, because everything needs to replicate fully before it can make any decisions.
So that can be quite tricky if you do multi data center, but it also depends a little bit on how much data you want to move between the data centers, and especially what happens when one breaks. It should be fine, but it's definitely something you want to test for your environment, to make sure it works as expected. Yeah, that is true, but in my limited setups the Grafana databases haven't been that big. Yeah, Grafana has not really been designed for this. It's something I've been thinking more about recently, but it's not quite there yet internally at Grafana Labs. We have a lot of different data centers, or regions, or whatever we want to call them, and we deploy the same dashboards to all regions, but we have them as code, just as Jsonnet files. So if we make any changes, they get propagated to all the different regions, and we can access each region and the Grafana install there. They are all independent, but that means you have to manage everything as configuration rather than through the UI. Yeah, I briefly touched on that in my presentation as well: I mentioned Grizzly, I think, and CUE is coming at some point in the future. But there is more stored in the database, of course, than just dashboards. Yeah. So the dashboards I can easily replicate, and probably should replicate as code, better than keeping two hand-made copies. Yeah. So let's see, someone has a question about multi-master replication. Yeah, that's a long story. So if one master fails, the other master has to serve the updates and inserts. I think the gist of the question, or the remark, is that multi-master is complicated, and the answer to that is yes. But so is running stuff in two data centers and making sure it stays up in two data centers. We could have gone master-slave, but then at some point you're always talking to the other data center, and you lose a bit of time while it hopefully fails over, whereas with this mechanism you should be able to keep running with minimum downtime. Yes, fixing a broken master-master setup is difficult, and I have screwed it up quite a few times; luckily, we only run one of these in production. There's someone still typing, so we can wait for that. My question to you is: when is CUE coming? Oh, I can't say anything about that; it's not my project, and I don't want to make any promises. Yes. What's the main benefit, what is the problem you want to see solved? Yeah, so configuration as code. I've been using Grafonnet, which seems to have sort of stalled at the moment. The big buzz at Grafana is that we're going to CUE, but I haven't seen any real examples yet of how my dashboards would be built. Yeah, I think there's going to be more on that soon, but it's definitely a well-discussed internal topic. For me, the workflow now is to build a dashboard in one place, pull it with Grizzly, and then push it out, because it's JSON, so that still works. Yeah, I can get that. Cool. I guess we can end this a little bit earlier if there are no other questions. I hope to see you next year in Brussels instead. Yes, for sure. I hope this is definitely the last online one. I think it's kind of cool with the video and chat, and I think this is by far the best online conference experience, even compared to the big companies doing online conferences, but I still miss FOSDEM. Yes, I must say that the setup works very well. Okay, so yeah, Kris Buytaert linked to something at Inuits; I'm going to check that out, super curious about that. Cool.
I guess we'll see each other next year, hopefully. Yes, for sure. Thank you. See ya. Bye-bye.
A gentle introduction to Observability and how to set up a highly available monitoring platform across multiple datacenters. During this talk we will investigate how we can set up a monitoring stack across 2 DCs using Prometheus, Loki, Tempo, Alertmanager and Grafana, monitoring some services, with some lessons learned along the way.
10.5446/56940 (DOI)
Hi everyone, thanks a lot for being here again at FOSDEM. It's a great pleasure to be present, even online, for a second time. Today my colleague Jean and I are going to talk about backup and restore tools and compare their performance. Doing our tests we found some really interesting results, and I think it's a nice topic to revisit now in 2022 because, for example, MySQL Shell introduced new utilities for backup and restore, and it's worth comparing them with what we had in the past, from mysqldump until today. First, for those who don't know me, I'm Vinny, Vinicius, feel free to call me whatever you like. I've been working in the support team at Percona for almost five years, specifically with MySQL and MongoDB. Recently my colleagues and I released a book about MySQL, Learning MySQL, for those who are starting their career in this MySQL open source world. And Jean, would you like to introduce yourself? Thank you. Hello everyone. My name is Jean, I'm from Brazil as well. I joined Percona in 2021, in the middle of the pandemic, and since then I've been helping Percona with MySQL and MongoDB tasks. I hope you like our presentation. Let's move on. Thanks, Jean. To start, let's look at which tools are available to perform backup and restore. The tools we chose are: mysqldump, the classic one that comes with all MySQL binaries and is relatively well known by everyone (I bet everyone watching this has used mysqldump at least once in their life); mydumper and myloader, which are part of the same workflow, first you use mydumper and then myloader; mysqlpump, which is present in the MySQL binaries, in the community version and in Percona Server as well; XtraBackup, a physical backup, an open source tool provided by Percona; and MySQL Shell, which has been available since MySQL 5.7 and 8 and introduced new utilities for these tasks. One clarification, since I mentioned physical backups: quickly recapping, there are two types of backups, physical and logical. Physical backups literally copy the files that belong to the database; that is how XtraBackup works, copying the files. If you shut down MySQL and do a cp or scp or something like that, it's a cold backup, which also works as a physical backup, although, as we'll see, you can't simply copy the files while MySQL is running, because the copy won't be consistent; for that we have XtraBackup. With a logical backup you don't copy the files: the tool connects to the database and performs SQL operations to extract the data and save it into one or multiple files, so you won't have any of the database's own files, like the redo logs. A few things to consider when taking a backup, based on our experience (I wrote a blog post about backup performance and got a few comments, and I think it's worth remembering them): things like network encryption and file encryption, whether we use a single-threaded or a multi-threaded tool, whether we compress the backup (disk is expensive, especially in the cloud, where each gigabyte has a cost), and backup time and restore time, which matter not only to DBAs but also to managers, to measure how long the operation can afford to be offline in a disaster recovery situation.
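To give a feel for these tools, here are example invocations; these are illustrative sketches rather than the exact commands used in the benchmark, and flags vary between versions:

```bash
# mysqldump: the classic single-threaded logical dump
mysqldump --single-transaction --all-databases > backup.sql

# mydumper: multi-threaded logical dump (compression options vary by version)
mydumper --threads=16 --compress --outputdir=/backups/mydumper

# MySQL Shell dump utility: parallel and zstd-compressed by default
mysqlsh root@localhost -- util dump-instance /backups/shell-dump --threads=16

# Percona XtraBackup: physical (hot) backup of the data files
xtrabackup --backup --parallel=16 --compress --target-dir=/backups/xtrabackup
```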
On top of that there is the total time, of course, which we'll go over in the next sections; we'll see that a lot of the tools perform the backup relatively fast, but the restore suffers a bit, so we'll talk about that. There is also point-in-time recovery, which is a very important feature for most companies, because they can't afford to lose more than 30 minutes or an hour of data, and performing full backups is an expensive operation, so usually people do one full backup each week and then incremental backups. And a few other considerations I couldn't think of, but I'm sure you may have other items. So let's start with the numbers people are most interested in, the backup benchmark, which is what I like to call the first side of the story, because it's where everything begins. What I used for this benchmark: an AWS instance, an m5dn.8xlarge with 128 GB of memory and two io1 disks with 5,000 fixed IOPS, because I didn't want burstable disks interfering with the results, so I provisioned storage with fixed IOPS. The operating system is CentOS 7.9, and as you can see, the versions of the tools: MySQL 8.0.26, the same for MySQL Shell at the time of this recording, mydumper, both the GZIP and ZSTD builds, at version 0.11.5, and XtraBackup 8.0.26. Note that XtraBackup 8.0.26 does not work against MySQL 8.0.27; that was a decision Percona made recently. And how my data is composed: we have 96 GB of data on disk, distributed across 90 tables of different sizes, from 3 GB down to a few megabytes. I created it using the TPCC script, which is available here and is open source. The idea, of course, is to have more data than the configured buffer pool, because that's what we observe for most customers: the buffer pool contains only the hot data, while the database is much larger than what is configured. Let's start with the results. This is the backup size; as I mentioned, the database is 96 GB. XtraBackup with no compression is easy to understand, because it's a copy of the files, so it's the same size, except that binary logs are not copied (if you have 100 GB of binary logs, they are not part of the backup). Going down the list we see XtraBackup with compression, mysqlpump with no compression and with LZ4, mydumper with and without compression, MySQL Shell (which uses ZSTD by default, so it's already compressed), and mysqldump uncompressed. I also tried to present this a bit differently, as a ratio: XtraBackup uncompressed is equivalent to the database size, while MySQL Shell, as we can see, produces roughly 9 to 10 percent of the total size of the database. So you can theorize that if you have a 100 GB database, MySQL Shell will give you around 9 to 10 GB of backup, and you can extrapolate to a terabyte or whatever and get a similar estimate. Now to the runtime: this is the time to execute, and less is better, so the smaller the bar, the faster the backup ran. As we can see, mysqldump, the only single-threaded tool here, is huge, and the other ones run in almost the same time; MySQL Shell and mydumper with ZSTD are the fastest tools for the backup. Because mysqldump took so long, the graph is not very readable, so I took it out so you can get a better idea of the parallel tools. The numbers are the same, nothing changed, but I think this shows more clearly that, for example, XtraBackup took more time. Each group of bars is for 16, 32, and 64 threads; remember the machine has 32 cores. Those are the results. I got some questions and comments on the blog post I wrote. XtraBackup currently does not support ZSTD: you need to pipe the backup and then compress the files manually. However, we opened a feature request, so hopefully we will soon see ZSTD support in XtraBackup too. In terms of performance, during the tests I did not observe any impact from using a socket versus the TCP/IP protocol; both ran at the same speed, with one or two seconds of difference, which for me is within the margin of error. Another question I got: MySQL Shell uses SSL by default to connect to the database, while mydumper, for example, does not, so how meaningful is the impact of SSL? I ran the tests and, as we can see, SSL took four seconds more than no SSL, which I don't consider a big difference; especially when you look at the total time, this is not a meaningful impact on the overall process.
So I took it out so you guys can have a better idea when we are talking about parallel tools. And well, the numbers are the same, this didn't change, but I think this shows with more clarity, like for example that extra backup took more time. And each bar is for 16, 32 and 64 threads. Remember that the machine has 32 cores. So these are the results. I got some questions and comments on the blog post that I wrote. Extra backup currently does not support the STD. You need to do a pipe on the backup and then compress the files manually. However, we opened a feature request. So hopefully we will soon see support to the STD in extra backup too. In terms of performance, during the test, I didn't observe any impact in performance in using socket or TCP IP protocol, both running to the same speed with one or two seconds of difference, which for me, this is the margin of error. And another question that I got, for example, my SQL shell uses SSL by default to connect to the database. Why am I done, per or not? So I got a question like how meaningful is the impact of SSL? And I run the tests, as we can see, SSL took four seconds more than no SSL, but I don't consider this as a big difference. I would say, especially when we see the total time, this is not a meaningful impact to the overall process. And now my colleague, Gian, will talk about the restore benchmark. Okay. Thank you, Vinny, and hello, everyone, again. So let's keep the pace and speaking about the restore results. Again, as you might know, a good backup policy also includes a restore routine. So it's interesting for you to have these numbers to evaluate which tool to use when planning a backup routine. Okay. Next, Vinny. So just a review on the restore machine. It had the same profile as the backup one. It's a 32 CPU machine with two NVMe disk with fixed throughput. And the tools that we used had the last version running. Okay. So let's go next. Okay. Let's talk about the restore itself. So as you can see, the numbers are in second. And MySQL dump, unfortunately, it gets most of the time from the others we've compared because it's not fair to say about that, but it's a single-threaded tool. So it's this kind of results, it's expected from our side. But the fun part is when we compare the MySQL shell and the MyLoader results, which is nice the numbers they got. So in the next slide, we are going to see it closer without the noisy of MySQL dump. Okay. So here we have the numbers and it's quite interesting because there are a few points which we would like to show for you guys. So the first one is the battle between MySQL shell and MyLoader is quite tough in terms of numbers. They are very, very close. So any of those tools that you pick to use, you'll be more personal choice. So if you look the numbers, let me see. We have around, let's see, we have two minutes. We have, yeah, we have seven minutes for MySQL shell and a little bit more for the MySQL Loader. So but the point here is also worth mentioning. It's about the MySQL bump. It shined when we did the backup because it provides the parallelism option. But then when we did the restore, the numbers goes a little bit high if you compare to the others. So that's because for the restore process, the MySQL bump, it's a single-threaded operation. So that's why, unfortunately, the numbers skyrocketed a little. And the last, but not the least, we have the extra backup running with impressive numbers. 
So as you can see, it took two minutes to finish the restore with no compressed data and seven minutes with compressed data sets. So it's a very good result for extra backup. So next, please. Okay. So the restore itself shows a little good numbers. So in the next slide, we'll have the total of the numbers counting backup and restore and that will help you to show, to move the decision. Next slide, please. Okay. Here, we have the backup and restore numbers summit. So as you can see, the MySQL dump, it got the highest number. So it took 27 hours to finish the process. So that's a lot of time to run a backup and restore process. But then we also have the information from MySQL shell and the other tools. On the next slide, you'll see it without the MySQL dump information. So great. So here, we can see that, again, the battle between MySQL shell and MySQL loader keeps happening. So the numbers are quite good in terms of how long did it take to finish the entire process. So again, both of those would be a good choice to use in a backup policy. But then when you check the MySQL, the MySQL pump, do it to the restore process. It unfortunately got a very, very high backup and restore process. Again, mainly because do it to its lack of feature in terms of parallelism for restoration process. And then we also have the extra backup, which again, from the other tools, if you compare, as Vinny said before, it's a physical backup. So the things are a bit different in terms of how it takes the backup and how it restores. And then it also provides a very good result against the others. So next, please. So what we can observe after all of this. As we saw, we can't look at only one side, only the backups or only the restore. There is always the two sides of the history plus your business budget, like how much space in GSC we will use. So you need to use compression, security compliance, you need to use encryption or network encryption or rest. So those are all the things. We can observe like extra backup seems to offer a good balance between backup and restore time. The two, of course, is not the fastest when doing the backup. We saw that my damper and my SQL shell are much faster, but overall, it has a good advantage, especially when using compression. Because as we can see here, the difference between a compressed extra backup and no compressed, it's only a few seconds of difference. My SQL pump, as Jean said, it is a great tool to take the backup, but it suffers because it sends everything to a single file and you need to import through MySQL. So hopefully in the future, I would like to see MySQL pumping splitting the data into different files, like my damper does. And then it should be like a very good tool to use it. Another thing is that, as I mentioned, we don't observe significant impact when using compression or SSL or TCP IP versus socket. Like when we put the entire process, there is not a big difference. And if you consider the amount of disks that you are saving, it might be a very good option for you. And lastly, like it's clear to see that parallelism, especially in these nowadays, which both CPU machines are available, parallelism is a must. We can't wait more than one day to do a backup and restore using MySQL. So it's not feasible nowadays. It is possible to squeeze more juice from this. For example, I didn't touch any MySQL settings, like we can avoid writing to the binary log, we can disable it, we can set the R&DB flush logs TRX commit to 2 or 0. 
We can also disable the doublewrite buffer and many other things. However, this needs to be evaluated by you; you have to weigh the risk, because we are trading reliability for performance. If your database crashes during the process while you have everything disabled, you will probably have to restart the whole process, and I really recommend that you do. There is a blog post written by Yves Trudeau, an architect from Percona, where he gives some advice on how to improve MySQL writes. On a side note, the first line is: be aware, this can corrupt your data, so don't test it in production, go to your QA environment. And now the choice of the presenters. Gian, which tool would you like to use if the database were in your hands? So it depends on the situation, of course, but generally speaking I would choose Xtrabackup. Some considerations first: it's nice to see MySQL Shell and myloader with great numbers, and it's great to have options to use with the database. But whether for small tests or for complex ones, like building a backup routine, for me personally Xtrabackup helps, so I would go with Xtrabackup at this moment. Okay, thank you. Yeah, I must say I agree with Gian. I really like the PITR option of Xtrabackup, which is very flexible, and the other tools don't have it. But because I used logical backups for a while, I got used to mydumper and MySQL Shell. I like to use MySQL Shell only with MySQL 8; I know the Oracle guys may not agree with me, but that's my personal choice. So if I were doing a logical backup, I would certainly use MySQL Shell or mydumper and myloader, but never mysqlpump or mysqldump. As we can see, the difference is huge. And Gian already gave his comments about Xtrabackup, and I know we are exceeding the time a bit. So if you have any questions, please feel free to ask in the chat, or you can reach out to us on social media or the Percona forums and we would be glad to help. Once more from my side, thank you for having us here at FOSDEM again, it's a huge pleasure. And, Gian? Yeah, for me as well. It's my first time, so I hope you have all enjoyed this presentation. If you have any questions, feel free to chat with us. Okay. Bye-bye. Thank you.
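To make the durability trade-offs mentioned at the start of this closing section a bit more concrete, the settings could be relaxed for the duration of a restore and reverted afterwards. This is a hedged sketch using MySQL Connector/Python, not something the presenters showed; as they warn, a crash while these are disabled means starting the restore over.

```python
import mysql.connector

cnx = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = cnx.cursor()

# Trade durability for import speed while the restore runs (global settings).
cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 0")
cur.execute("SET GLOBAL sync_binlog = 0")
# Skipping binary logging is per-session, so this only helps if the import runs
# in this same session; restore tools usually expose their own option for it.
cur.execute("SET SESSION sql_log_bin = 0")
# Disabling the doublewrite buffer generally needs a config change and restart,
# so it is left out of this sketch.

# ... run the import here ...

# Revert to the durable defaults once the restore has finished.
cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 1")
cur.execute("SET GLOBAL sync_binlog = 1")
```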
Backup and restore methods are concepts that everyone knows the importance of. Over the years, open-source tools emerged like MyDumper, Xtrabackup, and Mariabackup. Also, with MySQL 8 new shell, new utils for dump and restore were introduced as well. In this presentation, we are going to compare the newest backup/restore methods with the most used ones. We will see how parallelization can influence the speed of backup and restore process and also how the compression algorithms can influence the performance. In this talk, we will compare mysqldump, mydumper/myloader, mysqlpump, MySQL Shell utils, and Xtrabackup.
10.5446/56941 (DOI)
Welcome to Efficient MySQL Performance in 40 Minutes by me, Daniel Nichter, for FOSDEM online, February 2022. Let's begin with a quick introduction. Again, my name is Daniel Nichter. I have 17 years of experience with MySQL. My personal website is hackmysql.com, where I've been posting blog posts, tools and other stuff since about 2005, but I'm usually known for my eight-year tenure at Percona, where I worked on Percona Toolkit. I'm currently a DBA at Block, formerly known as Square, and right now I'm working with Cash App, helping to manage their databases. I'm also the author of Efficient MySQL Performance, recently published by O'Reilly; you can see the book cover there on the right. This is actually my first virtual conference and recorded presentation, so it's a little strange for me because I'm used to presenting in front of a live audience. So to help you put a face to the voice that you're hearing, this is me. Up top is a crazy beard that I grew recently, but it was a little too crazy, so I finally trimmed it, and I usually look more like the photo at the bottom, which is how people would probably recognize me from the previous conferences that were live and in person. Likewise, I need an audience too, so I chose this picture to have an audience to talk to, because this is actually FOSDEM in 2019. So I will pretend like I am talking to this audience. So, objectives for this talk. This is actually the second time I've recorded this talk, because the first time I recorded it, the runtime came out to one hour, 25 minutes, and this is supposed to be a 50-minute slot with some time for Q&A. So that didn't work. That is because there is a lot of material to cover. This talk is based off my book, which is why it has the same title, Efficient MySQL Performance. The book has 10 chapters, but I knew that wouldn't fit in the time slot, so I first whittled it down to chapters one through four and chapter 10. But even after recording that, it was an hour and 25 minutes, so I've had to whittle it down even more, to just these three chapters. This presentation, like the book, is intended for engineers using MySQL, not database administrators or MySQL experts, or engineers aspiring to be DBAs or MySQL experts. It provides a path through the complexity of MySQL, as depicted here in this diagram. When you're new to MySQL, it can be very difficult to know where to start and where to go after that to achieve better performance, as shown there. Performance is like a shining star, somewhat out of reach. Well, that's what this presentation does in super condensed form, given the limited time: it begins to show you a path through that complexity, and that path begins with query response time. The path begins with this understanding that performance is query response time. Pretty much everything comes back to this point, especially for engineers using MySQL: performance is query response time. But let's step back for a moment and ask a broader, more general question, which is: what do we want from better performance? When engineers say that they want better performance with MySQL, or that MySQL is running slowly, or when they ask a DBA to do something to increase the performance of MySQL, what are we really getting at here? What do we want as a result? Oftentimes there's a perception that we want more QPS, but QPS is not performance. It is one measurement of performance, specifically just throughput.
But whether you have relatively high or relatively low QPS doesn't actually mean that MySQL is performing well. And I'll show in a later slide how much QPS can vary based off different applications. So just having high QPS doesn't mean that you have great performance. Sometimes it's another metric like CPU usage. Maybe the engineer wants MySQL to utilize more CPU cores. And I've actually had this case before, an application that was never really utilizing more than 50% CPU. And the engineer wanted more performance out of MySQL and they thought, well, if MySQL would just use that other 50% of the CPU, then it would be performing better. But that's not actually the case either. Because the contrary sometimes happens too. Sometimes we want less CPU. I've had this happen too. An application was causing MySQL to consistently use 90% or greater of CPU. It was really burning up the CPU. And in that case, the engineer wanted MySQL, wanted better performance, meaning that MySQL would use less CPU. So you can pick any metric and it's completely arbitrary. And that's because these individual metrics are not what we really want from better performance. At the end, they're just measurements out of performance. They aren't what performance actually is. So what do we really want? It's not those things. What we really want is to do more work in the same amount of time with the same hardware. You can think of MySQL like a machine in a factory that outputs something. That's its work. We want to have that machine stay what it is. But we want that machine to do more work, output more of whatever it makes, for example. Do more of that in the same amount of time each day, for example. So that leads to the question, well, if MySQL is the machine, then what does it do? MySQL really just executes queries. At the end of the day, you can just say that this is its one and only job. Of course, under the hood, it's actually doing a lot of other things like replication, for example. This is its purpose. The application puts in some data, stores it, and then maybe later it executes a query to fetch back that data. And that's really all MySQL does is execute queries. So what we're really saying here is this. When we want better performance, we really want MySQL to execute more queries in the same amount of time with the same hardware. That would be real nice. And if it could do that, then we would say, oh yeah, MySQL is performing a lot better. Okay, great. So how do we achieve that? Well, the typical approach is one, scaling up the hardware. So basically a bigger, faster machine than tuning MySQL, tuning the machine, making it more efficient. And then finally, optimizing queries, optimizing what we're putting into the machine. That is the common sequence that I see all the time. And it's understandable, perhaps, because from the application engineer's point of view of the person using MySQL, scaling up is relatively easy, especially if you're in the cloud, just click some buttons and now you have more CPU cores and more RAM. Tuning MySQL is also somewhat easy from their point of view because they ask the DBA to do that. Hey, DBA, please fix the MySQL configuration or do something to make it more efficient. And then finally, the most difficult is optimizing queries. And I understand that too because query optimization is not a trivial task. There's lots of things you have to learn and consider to do this. So by comparison, it's a lot easier. But honestly, that is the wrong approach. 
The correct approach is first optimizing queries, then scaling up hardware as a distant second, and then even more distant tuning MySQL. Why? Going from the bottom up here. Why is tuning MySQL last? Because MySQL is not a new database. It has been around for 20 years or more, and it has been tuned and optimized over all that time by world-class database experts. So internally, MySQL is already very tuned and highly efficient. Tuning MySQL these days is akin to the proverbial squeezing blood from a turnip or squeezing blood from a stone or spinning gold from straw, whatever you want to use. Some experts can do it, and there are cases where it needs to be done, but by and large, it is a distant, distant third. Likewise, scaling up is a distant second because hardware is usually pretty plentiful. It's pretty easy to get a lot of CPU cores, a lot of RAM, a lot of IOPS on the storage. I mean, there are cases, especially in the cloud where it can be drastically under-provisioned. But again, hardware is so cheap and plentiful these days that it's usually not the first thing you should jump to. It's a distant second. But number one, optimizing queries. As we just said, the primary work that MySQL does is optimizing queries. That is the input. That is what drives its work. So that has the most bang for the buck, so to speak, for getting better performance by optimizing the work that the application is asking MySQL to do. So let me move that up top. So really, the first thing you need to do for better performance is optimizing queries. And then I grade out two and three because they are important, but they're a distant second and a distant third. So with respect to optimizing queries, again, we come back to query response time as the primary metric, as the north star that guides all your efforts. Because for two reasons, well, query response time is the only metric we actually experience. And if you think about that for a moment, you realize that that is true and uniquely so because if a query takes seven seconds to execute, we actually experience those seven seconds of maybe seven seconds of impatience. And not just us as the engineers, but customers as well. So they experience query response time in the form of slow queries, or hopefully they experience it in the positive as, you know, very quick queries in a very snappy responsive application. So that's the first reason why query metric or query response time is the most important metric. And the second reason why query response time is that it is actionable. We can directly change and optimize queries to make query response time less to make the queries execute faster. And that is the clear goal here. You can do this with some other metrics, but it doesn't necessarily mean that response time is going to be better, or that it will have some sort of impact on us as engineers or customers. For example, you could make a query access less rows, but unless that affects response time, no one really experiences accessing a certain number of rows. And sometimes you have to the query if the query means to select 10,000 rows, then it has to select 10,000 rows so you can't really change that. But there's always things you can do to affect changes to query response time to reduce it. So that's why again, it is the focus here. So when we want better performance, we are going to focus on optimizing queries because that is the work that my SQL does. The north star of all this is query response time because it's what we experience and it's what we can change. 
So again, it comes back to this, this all important statement that performance is query response time. And again, I'm going really fast here because we don't have much time in this presentation, but I wanted to at least provide three more specific steps about what you can do to get started along this path. And I blogged about this recently. If you go to my website hackmysql.com, there is a blog post from January about how to enable query metrics and use a query metric tool. But briefly, the steps are first you have to enable query metrics. This depends on several factors. Again, I documented in my blog post. Then you need to use a tool to analyze those query metrics to find out which queries are the slowest. And then three, you begin to optimize those slow queries. If you do this, the result will be that my SQL performance increases. So this is the first step along the path. This is the most important thing to know. In fact, if you do nothing else but this, you might achieve much greater performance with my SQL for quite a long time and you might not need to make any other changes. That's how powerful query response time, query metrics, query analysis is. That's why again, it is the first step along this path. Next step, access patterns. Let me begin with this idea. My SQL performance is limited by the application, not the other way around. This tends to garner some debate and it's understandable for reasons that we'll examine in slide after next. But this is the main idea that I want to press upon with respect to access patterns and we'll build more on this idea here in a few slides. But first, I wanted to present these real numbers from real servers, all of which are roughly the same hardware, all of which have roughly, you know, the same performance with respect to response time. Like all these, these are real production applications that have acceptable response time. And so, in so far as we can, you know, control all the variables we have. And what this shows here is how much different access patterns or workloads from the application can affect my SQL. So on the left column, we have threads running, which is a general indication of how hard my SQL is working because one thread executes one query. And on the right column, we have the number, the QPS for those number of threads running. So at the top, we had an application that generally has four threads running and does about 8000 QPS. But then we have an application that doubles the amount of threads running, 8, but has slightly less QPS, 6000. Third row, same number of threads running, different application, eight threads running, but now a huge jump to 30,000 QPS. And then fourth row, a different application has typically 12 threads running and it's doing slightly less QPS, 23,000. And on the last row, another example of a different application typically running 15 threads, but has 33,000 QPS. Again, I'm not saying that QPS here is an indication of performance because that's not what performance is. Performance is query response time and all these applications have good response time. But what it shows is the huge difference in work and workload and other metrics that the applications access patterns can cause in my SQL, especially those highlighted two rows there with eight threads running really makes the picture clear. Whereas one application typically causing my SQL to keep eight threads active, that turns out to be only 6000 QPS. 
Whereas another application completely different, obviously, must have completely different access patterns, has the exact same number of threads, so basically the same load on my SQL, at least in terms of threads running, but significantly more QPS. So these are real world numbers. So it gives you an idea, a sense of just how much access patterns from the application affect my SQL. So let's now get into cause and effect and two perceptions and a reality. The first perception here is that my SQL is slow and that is causing the application to be slow. That arrow is a little thin there, but the arrow is pointing from my SQL to the application. This is a very common perception, especially when engineers start to want more performance from my SQL or my SQL is suddenly slow. They think that my SQL itself is being slow and that is therefore causing the application to be slow. But I can assure you that this is rarely the case. There are rare cases where my SQL might have a bug, for example, that causes some sort of slowness. There are cases where a gross misconfiguration of my SQL might cause my SQL to be slow, but in a professional environment, presuming that my SQL was configured by experts, that shouldn't be the case. This perception is reality very, very rarely. And when it is the case, it is usually some of the most interesting and difficult cases to figure out, because as I mentioned earlier, my SQL is more than 20 years old, so it has been fine tuned to a very high degree. This is usually not the case. Second perception is somewhat related and somewhat more common, but again, in the grand scheme of things, it is still very rare. The perception here is that something is causing my SQL to be slow, and therefore that causes the application to be slow. So in the diagram, the turtle is the thing external to my SQL, causing my SQL to be slow, which then therefore causes the application to be slow. Now, this definitely can't happen in my career. As a DBA, I have seen this many times, and it could be anything from DNS to the network to a degraded rate array to, I mean, heck, I've seen faulty CPUs, faulty memory, all sorts of external things cause my SQL to be slow and then cause the application to be slow. But again, we presume that you're running my SQL in a stable, professionally managed and built out environment such that these things shouldn't happen. There are the exceptions, not the norm, and especially kind of on a day to day basis, if there's no obvious fires, if there's no obvious turtles, so to speak, then this is not the reality, at least not the one that you as an application engineer using my SQL should jump to. And if it ever is the case, DBA should be on top of it and should be aware of this and should notify you ahead of time that, hey, there's this thing affecting our databases. But again, this is usually not the reality. The reality is this, almost always, it is the application that is accessing and using my SQL in some way, and that workload, the combination of queries, data and access patterns from the application to my SQL causes my SQL to be slow, and then slow response time back to the application. But the root cause here is the application, how the application uses my SQL. And really, it's as simple as that. I would say that this is probably the case in the reality, at least 80% of the time, if not more. So the point is, when you're using my SQL, and everything's fine, and then suddenly everything's not fine, and my SQL is slow, start with this reality. 
Start by asking, hey, what is the application doing, or perhaps not doing, what has changed about how the application is using my SQL, because that is the vast majority of the times, that is why the application is getting slow response from my SQL, because the application is doing something to my SQL that is causing it to have slow response time. And only rarely, the other perception one and two are the case, and when they are, hopefully your DBA will let you know about it ahead of time. So let's keep that reality there at the bottom. And now focus on access patterns. And the interesting thing about access patterns, to me at least, is that everyone seems to know them and talk about them. Pretty much every engineer I've ever dealt with is aware of access patterns and even talks about them. But interestingly, I have never seen a, an enumerated list of specific access patterns for my SQL, and I couldn't really find any for any other relational transactional database. I did find one for Amazon DynamoDB. They publish a list of access patterns specific to DynamoDB. But apart from that, I couldn't find it from my SQL. And so in the course of writing my book, I enumerated this list of nine access patterns. And so I'll briefly run through each of the nine to give you an idea of what they signify with respect to how the application is accessing data in my SQL. And a lot of them will be very familiar to you, I'm sure, but now they have a specific name and are specifically enumerated. So the first access pattern is probably the most basic and probably the most important, which is read write. Does the application primarily read data, primarily write data, or is it a combination of both? And that's important because scaling reads and writes are much different considerations. So write scaling writes is much more difficult. Likewise, throughput is closely related to there's a big difference between an application doing 100 reads per second versus 100,000 reads per second. Likewise with writes, if you're doing, you know, one write per second, you can almost forget everything. If you're going to try and do 100,000 writes per second. Okay, now that's a that's a significant challenge and entails all sorts of other considerations. Data age is difficult to explain in short terms. But basically it is a reflection of, or it's an indication of the how my SQL is going to handle the working set size. Data is accessed in my SQL. The lingo is that data is made young. So when it's access, it's new, it's young. And when data ceases to be accessed, it gets older and older until it is eventually evicted out of memory. And because a certain percentage of the data, generally the rule of thumb is 10 to 20% of your data is considered the working set the frequently accessed data. That is the data that my SQL tries to keep in memory. And data age as an access pattern has us look at and consider whether that is actually true. The application sticks to that working set, or does the application tend to dredge up old data so frequently that it's difficult for my SQL to keep the working set in memory. Data model means, does the access reflect a relational database, a transactional database, you know, row or row oriented database, or does it reflect something else like a document store or Q. As it turns out, applications use my SQL for a lot of purposes, not all of which actually are good fits for a relational data model. So that's why we need to look at it. 
Transaction isolation is also important because my SQL is a transactional storage engine, at least in ODB, in other storage engines. And so does the application actually need transaction isolation, and if so, which levels. If it doesn't, then maybe my SQL or any transactional data store is not the right choice. Read consistency means that does the application need read after write semantics, or is eventually consistent read good enough or acceptable. If you can do eventually consistent reads, then perhaps this data can be read from a cache. So another important consideration, especially if you have a high rate of reads, you know, the three put through reads is very high. And if those can be eventually consistent, then great, there might be an avenue for performance optimization by using a cache. Concurrency means is the same data accessed at the same time. Does the application typically do this for applications with high concurrency, meaning they're accessing the same rows at the same time. Then the other access pattern traits become really important, like is it a read, concurrent reads or concurrent writes concurrent reads are relatively easy to scale but concurrent writes. And that is the that's the tricky performance problem to scale, especially if you're talking a high throughput concurrent writes. That's one of the most challenging things. So we need to consider the concurrency of the data access. Row access means does it access single rows, ranges of rows, or random rows all over the place. So if you have a high throughput and high concurrency that also writes data all over the place, that is, that's a real tough access pattern. But other access patterns are a little bit different. So the most challenging thing to consider is the high throughput and high concurrency. And again, that means either point ranges of rows or random rows. And then the result set means kind of three things. Does the query group sort or limit the result set. And that's a consideration because grouping and sorting and limiting the results is a very important thing. And that means kind of three things. Does the query group sort or limit the result set. And that's a consideration because grouping and sorting and limiting have different performance optimizations and might even work differently on different data stores. So if you begin to think, oh, we can move this access or store this data somewhere else, but it groups or orders or limits the results that that might affect your considerations. So again, all these nine access patterns, when you begin to sit down and think about them in terms of your application, really begin to shed light on how the application uses my SQL and why or why not it is getting the performance out of my SQL that it does. It's related to these access patterns. As shown there at the bottom, the application accesses my SQL in some way, and those access patterns are really what determine what kind of performance you get out of my SQL in terms of query response time. So, in the book, I make this analogy by comparing by comparing a Toyota to a Ferrari, and here we have a Ferrari. It's an interesting analogy, I think, because both brands of cars have roughly the same parts. They have engines, transmissions, wheels, roughly all the same parts. But even looking at a car like this, the Ferrari, you know that it is so much more performant or high performance and efficient and faster and everything. But if you begin to think about, well, why is that if they have roughly the same parts? 
Why is this Ferrari such a faster machine? And the answer is, is because it is designed and engineered in every aspect to be faster and more efficient. The point is, is that if you want maximum performance out of my SQL, if you want to improve performance out of my SQL, then you have to engineer your application in terms of its access patterns specifically to be like a Ferrari. Last up in this short presentation is two ideas actually about my SQL in the cloud that I think are very important to know because running my SQL in the cloud is becoming much more popular and common. The first important idea, especially if you're an engineer just using my SQL in the cloud and trusting the cloud provider to operate it, is that cloud providers are not DBAs. They're, they often describe themselves as my SQL being fully managed, but there is a lot more toward that story. And so to give you just a brief idea, this is the condensed version of a database, list of database operations that I use to help teach new database engineers. If you want the full, very long list, it's on my website hackmysql.com under the engineer menu at the top database operations manual. But this is the short list. So we can just go down through it real quick here to show and to make you aware of the things that cloud providers are not going to do. So from top to bottom, of course, the cloud provisions and configures my SQL for you. That's part of the basic cost of everything. But then the next several things, my SQL user, server metrics, query metrics and OSC, which is online schema change, that's all up to you to do. And, you know, if you have just a few my SQL, maybe that's easy, but as you begin to have many, then that becomes a serious consideration. Failover is usually within the same region, but there are caveats here. The cloud providers do do this if one my SQL instance in the same region fails, they failover to another. And this is what they call high availability, but we'll get to that here in a moment. So they do provide that, but there are caveats. So read the fine print and test it on a reality. What they typically don't do, depending on the product, is disaster recovery, DR. So if you need to run my SQL in a completely different region, for example, if you're running on the US West Coast, you need to run on the US East Coast. That's up to you. And that is not a trivial process to do. These are closely related to high availability. I put a question mark in the cloud because they say they definitely claim it's high availability. But there's caveats there and read more about it, read the fine print. I would argue that it's not really high availability. So that's why I have a check under the right column view. The cloud does provide upgrading my SQL, but they don't really automate it unless you just let them run it at during their maintenance window. They provide backup recovery, which is nice. But then the remaining four items CDC's change data capture, which is streaming out data changes, usually to a data lake. They don't do that. They don't do security. They essentially don't help you with my SQL related things. It depends on your level of support, but you're probably also not going to get a my SQL DBA or export if you contact their support. And of course cost. That's up to you to keep costs under control. Speaking of cost, the last idea and point is that in the cloud performance is money. You pay for everything. 
You even have to, for example, provision IOPS and then you pay for those IOPS and also be aware that yes, it can be easy to scale up the database instance. But costs tend to double for each level. So if you go up the next level, your cost double and if you go up another level, then your cost have got tripled. So in the cloud performance, again, going back to the fact that performance is query response time is imperative because you pay for that performance. So really, if you remember this crowd at slide from the beginning, we say when we when we want better performance, we want my SQL to execute more queries in the same amount of time with the same hardware. To do that, we have to optimize those queries focusing on query response time because that's the metric that we experience and it's something we can do. And in the cloud, we can also add that we want to do this without it costing a fortune. So very brief presentation. I wish I had hours to cover everything. But that is why I recently wrote the book because all of this and a lot more is covered in the book. But let's summarize and talk about some next steps for you. So in summary, first and main idea, first step along the path is that performance is query response time. We optimize queries to reduce response time. And that is what truly increases my SQL performance. The application is what limits my SQL performance as a result of its access patterns. So if you ever think, oh, is my SQL being slow, that's not likely the case. It's more likely that the act that the application is causing my SQL to run slowly. Start with that premise and then, you know, hopefully trust that the DBAs will tell you if that is not actually the case that in the rare, rare cases that maybe something is affecting my SQL and it's not the application affecting my SQL. And the cloud, please be aware that cloud providers are not substitute for DBAs. So hire or contract help if needed. So to avoid outages, because if my SQL is down, well, that's kind of the worst performance of all, right? No response at all. Performance in the cloud requires even stricter attention to these details and every detail because you pay for every bit of that performance. And finally, again, the path to my SQL performance begins with query response time. So next steps. I know I breezed through, well, it seems like a lot of information, but it also seems like not a lot of information. This is why I recently wrote a book on the subject. And it's no surprise that for the next steps, these are the first eight chapters in my book. And they are also what I would suggest to you to learn more about as you go along the path from, you know, trying to reach that shining star, that north star of performance. So start with query response time and query metrics, then learn about indexes and indexing, then learn about data, especially with respect to keeping data size under control and efficient data access, then learn about access patterns. You know, make your application, engineer your application as a Ferrari for high performance in every detail. Then you'll probably have to learn about sharding scaling out my SQL. Then you need to learn about server metrics. Then you should learn about replication lag. Why? Because the risk there is data loss. And then you should learn about transactions and row locking because that is a fundamental part of a transactional data store. So that is the path that I have laid out when I'm teaching engineers how to achieve better performance with my SQL. 
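As a concrete illustration of the first step on that path, the statement digest table in performance_schema is one common place to look for the slowest queries. This is a generic sketch, not from the talk; the connection details are placeholders.

```python
import mysql.connector

cnx = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = cnx.cursor()

# Top 5 query digests by total execution time; performance_schema timers are
# in picoseconds, so divide by 1e12 to get seconds.
cur.execute("""
    SELECT digest_text,
           count_star                      AS executions,
           ROUND(sum_timer_wait / 1e12, 3) AS total_seconds,
           ROUND(avg_timer_wait / 1e12, 6) AS avg_seconds
      FROM performance_schema.events_statements_summary_by_digest
     ORDER BY sum_timer_wait DESC
     LIMIT 5
""")
for digest, executions, total_s, avg_s in cur:
    print(f"{total_s:>10}s total  {avg_s}s avg  x{executions}  {digest[:80]}")
```

The worst offenders from a query like this are the candidates for EXPLAIN and indexing, which are the next steps on the path described above.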
I would highly encourage you to read the latest books. Start with what O'Reilly has published. For example, just recently they published three updated books about my SQL. Again, there's my book, Efficient My SQL Performance. There is also the fourth edition of High Performance My SQL, and there is also the second edition of Learning My SQL, all published by O'Reilly. Highly suggest that you read them and they will help you learn these aspects along the path. Then also read the relevant sections of the MySQL manual. That is the sections of the MySQL manual related to the things that you read in these books. You can also go to my website hackmysql.com slash help where I keep a list of these things, publishers, blogs, etc, etc. And finally, the best thing to do is get in there and try it. Start learning and doing these things. Begin with query metrics. Find the slowest queries. Explain those queries. Start going along these path, learning these things. Optimize those queries. Reduce query response time, and MySQL performance will improve. So thank you for attending this session. Thank you for attending FOSM 2022, and I hope you have a great rest of your day.
This presentation introduces the major topics and most important points of the new book Efficient MySQL Performance published by O’Reilly (Dec. 2021). And like the book, this presentation is intended for software engineers using MySQL, not managing it (but DBAs are gladly welcomed). In 40 minutes, we’ll quickly cover topics including query metrics, indexes and indexing, data access patterns, server metrics, and more. At the end, you’ll have a “lay of the land” for understanding the aspects of MySQL that affect performance, and you’ll understand where to go and what to learn next to increase performance.
10.5446/56943 (DOI)
Hello, I'm Kenny Gryp, MySQL product manager. Hey, I'm Miguel Araújo, I'm a software engineer on the MySQL Shell team. And today we'll be talking about MySQL InnoDB ClusterSet, which is the disaster recovery solution that we recently introduced. But before we start with that, I'd like to talk a bit about the past, what MySQL historically had. If you look at the past for MySQL, MySQL replication by itself has been very popular for a very long time; it's been used by many companies. Historically, setting up a MySQL database architecture meant one or more machines, like a primary and multiple replicas, and there are all kinds of architectures available. You can have a primary, as displayed here, and two secondaries; you can also have multi-tiered replication; there's even circular replication. All kinds of replication topologies are possible. That was really good, because there's a lot of flexibility: users of MySQL replication can build basically any topology they want. But it was actually quite tedious. There were a lot of tasks that had to be done: setting up a replica needed you to restore a backup, create a user, configure replication and so on. So there were a lot of things. MySQL had this flexibility and gave you the technical pieces, but it was up to the user to customize, automate and integrate this into their stack. So a lot of work was needed for DBAs to manage and set up this environment. And a lot of people had a similar environment, yet they still had to build the same automation, because MySQL did not provide these things. There were third-party applications, both open source and commercial, that helped with automating some part of it, like the failover part or the setup of the database servers and so on, but it was never a fully integrated thing. So if we go to the next slide, please, Miguel, thank you, to the present, or kind of present. About five, six years ago, MySQL InnoDB Cluster was introduced, and MySQL InnoDB Cluster is a different way of looking at things compared to what we had in the past. What we have is a set of technologies that make up MySQL InnoDB Cluster. Obviously MySQL Server, multiple MySQL servers, but we added MySQL Shell, which back then was a new command-line interface, a new shell to connect to the database and do all kinds of operations, but also to manage this database architecture and make it a lot easier, through either a JavaScript or a Python interface. We also have MySQL Router, which is the load balancer or routing layer that routes the traffic from the application to the cluster. It gives you one port for reads and one port for writes; the application connects to that port and doesn't need to care about anything else. So it's very easy for the application: it doesn't need to know which one is the primary, doesn't need to follow the primary, Router takes care of all of that. There are also features that we added to the server, like the InnoDB clone functionality, which allows us to provision a secondary automatically, basically take a snapshot from another machine and provision it, all fully automated and integrated.
No need to log in to the MySQL server host anymore, so you don't need shell access to do these kinds of operations; it all happens through a MySQL connection. The clone opens MySQL connections to the donor and gets everything provisioned. So this whole thing is very easy to use, but it also meets a lot of business requirements. And we introduced MySQL Group Replication, which is a different way of replicating. Traditionally, MySQL replication has been asynchronous replication, and there's been this thing called semi-synchronous replication, which is kind of a plugin on top of that. Group Replication adds a Paxos-based replication architecture that allows us to have consistency. We have network partition handling, so we can guarantee no data loss in the event of the primary failing, because a majority always acknowledges and receives a transaction before it is committed. There are automatic membership changes, even the provisioning of a new member; it's all integrated into MySQL Group Replication. So InnoDB Cluster is the integration, the solution built on top of all these things that makes it all very easy to use, and Miguel will show some demos today on how to set up MySQL InnoDB Cluster, which is part of MySQL InnoDB ClusterSet. So that was back in 2016. In 2020 we introduced MySQL InnoDB ReplicaSet, which is basically the same as InnoDB Cluster, but instead of using Group Replication, which gives you all these guarantees, no data loss and automatic failover, it uses asynchronous replication. So it cannot guarantee no data loss, because the replication is asynchronous, and the failover is manual in this case. Why would we have a solution like this? Well, we have some customers that do not have those business requirements: some data loss is OK, and maybe the network is not so stable, because with Group Replication you have to have a very stable network. If you don't have that, you'll have a lot of nodes being removed from the cluster and joining the cluster again, and that's overhead and can cause problems. With InnoDB ReplicaSet they have a solution for that as well, or they simply don't have those requirements. So that was 2020. And now, next slide please, is MySQL InnoDB ClusterSet, which we introduced at the end of last year. What this is, basically: we have MySQL InnoDB Cluster, which provides high availability, and InnoDB Cluster can serve as a disaster recovery solution, but that's not what most of our users or customers have requested. We want high availability, no data loss, within one data center, for example. As you can see here, we have Rome and Brussels. Rome is the primary cluster, meaning we have InnoDB Cluster running there with Group Replication, no data loss when there's a failure. But when there's a disaster and the data center fails, Brussels can take over; replication is asynchronous between them, and Brussels is basically another InnoDB Cluster. So this is all fully integrated, easy to use, almost as easy as setting up a single MySQL InnoDB Cluster. There are some differences in the guarantees we can offer, of course: no data loss when a failure happens within a data center, so high availability, but for disaster recovery, as it's asynchronous, there is a chance of a split brain.
There is chance on some data loss. But that seems to meet most of the requirements for most of the customers. Of course, this requires, like because this is asynchronous, it's OK if the network between those two data centers is not very low latency or goes down from time to time because of some maybe it's over the internet, which we don't recommend. But asynchronous application is perfect for this because any troubles with the network or the link between data centers or with Brussels will not affect the primary cluster and the workload. So that's very important. And what we can do with inodb cluster set is you basically create one inodb cluster and then you can add one or more replica clusters to it. And again, what we will do is with one command, you will add another cluster. You will add nodes to any given cluster. It will provision the data itself automatically. Router knows about the complete topology. So there's no need to change anything in router or the application. You can dynamically change the behavior of the router, all of which are to be true, kind of kind of demonstrate today. Part of Miguel, no demo today. So Miguel, yeah, so here's another diagram. Here's an example where we add a third data center, Lisbon. So they are both replica clusters are replicating from the primary cluster. And this is just an example. There is no limit on how many replicas you can have. So that's all what you can have for inodb cluster, which uses group application. There's a maximum, sorry, there's a maximum of nine nodes, but nothing demands that you have a minimum three nodes. Well, we do for high availability. If you want high availability in a certain region, we recommend three or five nodes in one region. But you can have a replica cluster that is one node like in Lisbon. There's just one machine there, which is fine. Maybe that's okay for the business. Maybe that does meet the requirements or there's not a lot of resources available and so on. So there's a lot of flexibility in there. Now let's talk a little bit about the business requirements. So we have two things that I briefly touched on is the recovery point objective and the recovery time objective. So just quickly, so recovery time is that how long does it take to recover from a failure? So that's RTO. And then RTO is how much data can be lost when a failure occurs. So both of these are objectives and these are very important, but we also need to look at the context of those, the type of failure that happens. And with high availability, we can have a different RTO than with this disaster recovery. The business might not need a no data loss in case of any event. It might be fine that there's some data loss when a disaster happens versus when there's a single server failing for high availability. So we need to look at both of them. And then of course we have a human error, which also is a type of failure, which in other clusters that does not resolve, of course, but what if somebody drops a table or a database by accident or an application but deletes more data. So you need to recover from a backup and you need to replay the bunny logs to do a kind of a point in time recovery. How much data loss can be there? So RPO of zero, obtaining RPO of zero with that is actually very quite difficult. So all these RPO and RTO kind of depends on the type of failure as well. So it's important to ask the business kind of what does the business need and what is the minimum we need, of course. Now to go to some recommendations, right, when to use what. 
And if we look at a single region, so high availability, so in ODB cluster, when a failure happens, there will be no data loss. So it's single failure and a single failure means the primary failing. If the primary fails, a secondary will promote itself, the group will promote the secondary and then it will take over and there will be no data loss. The failover is also measured in seconds. So I think the minimum is five seconds. That's the minimum basically. And it's configurable. There's a lot of knobs you can tune to make this higher depending on your environment and so on. If you have these needs, then that is the best solution. If you do not need no data loss and failover can be manual, you can use MySQL in a DbReplication. You have the added advantage that it's asynchronous. So when you do a write to the primary, it does not need to get an acknowledgement from the majority that the transaction is okay received by the other nodes and it can continue. So it is better for write performance, this asynchronous. So it comes at a cost, of course. So some data loss is possible. My failover in this is manual. So there's no automated failover built into the system. That is something that the user has to automate. But in general, we recommend if you need automatic failover, use MySQL in a DbPluster. So this is within a single region. Now if we think about multiple regions, we have multiple solutions and one that we don't demonstrate today, but that is possible. And there are users and customers that we have that use this is to deploy MySQL in a DbPluster across multiple regions. And that way, there is a possibility to achieve RPO0 and a failover in seconds with MySQL in a DbPluster, so with high availability and disaster recovery. In this example, there's three data center and each has one node. You can also put two nodes in each data center. That is six. It's an even number, which might be a problem for quorum for the network partition handling. But in general, it's not a problem because it's unlikely that there will be three members isolated. It will usually be one region or two regions that are kind of partitioned from each other. So a majority will almost always be found. So that's good. You can have automatic failover within a region. No data loss if you design this well. But the downside is you need three data centers. If you only have two data centers, you can put two nodes in one data center, two nodes in another data center. But that's not going to help you because if there's a network partition, you need a majority. The majority group, the majority partition will elect a new primary if it doesn't have the primary. But if you have two, two, and you have a network partition, then basically none of them will accept reads because there's no majority. That's why you need three regions basically. And you need a very stable network for this. And then we have the added write latency because there will be added latency because this is a cross-wide area network. The write performance will also be affected. So we have users of this, but the majority of the users want what's next is the MySQL InnoDB cluster set, which is basically the high availability, very good guarantees, automatic failover and whatnot. But for disaster recovery, which is less likely to happen, and if it happens, it affects more than just the database. It's not about just the primary server failing or the group failing, all the database is failing at once. It usually also means that the whole application is impacted. 
The whole stack is impacted as well. So manual failover in that case is what we provide. Now manual meaning, it is a single command to do this failover. And if you want, and if you need to make this automated, you can automate it yourself and just execute that command. There's no holding you back. But we cannot guarantee that there's no split brain or anything. That's why it's a manual command. So it's up to you as a user to automate and figure out all these edge cases. But we do add features, and we'll also show today some of the new features we introduced this week or two weeks ago, basically to enhance on cluster set, like we have fencing support, we have some functionality to make it easy to do fencing and so on. So very good. So again, when a region fails, no data loss cannot be guaranteed, but it does from within a single region. So my school, in ODB clusters. So next slide, please. Now we're going into demo. So I think the rest of the presentation, we're going to show how it works or Miguel will show. And for that, the demo environment has three hope. We took three hosts, and each host is running three instances to kind of kind of simulate three different, two different data centers. And if you want to do the exercises yourself or actually see and do it yourself, you can download it on Miguel's GitHub page. I don't know if you next slide, I think what we're going to do first is do the initial setup. So Miguel will go and it's all new Miguel now to create the cluster and get everything up and running. Yep. All right. Okay. Let me start Shell using 8028, the one released a couple of weeks ago with the new features that Kenny was talking about. So I'm connected to one of the hosts, Rome. We have deployed three instances there. So I'm going to start by creating a standalone NinoDB cluster. This might sound or look repetitive for some of you, but maybe for other people not. So I'll prefer to do it from scratch. So let's start. I have an admin account and I connect to the instance running on 3331. And first thing I will create a cluster using the admin API, the create cluster command. I'll call it Rome and Shell will just create a new entity in the metadata schema, perform as a bunch of validations, set a bunch of settings needed and start group application and create our cluster object, Rome. You can see the status. I only have one instance that is the primary. So when you create a cluster, the first one on which the instance that you pick to create a cluster is going to be the primary. Next I'll add some instances. I'll add another one, Rome 3332. Okay. Now we have to do provisioning of data and Shell will try to understand, to check your status of your cluster and determine the best provisioning method. Method could be clone based provisioning if available or could be incremental recovery that is based on binary logs. The clone is the recommended and it's actually the default method. So we'll perform, we'll use clone. It will do the physical snapshots, transfer it over the network. Shell will handle everything, creating the accounts, the configurations to do clone, monitor it. It's going to be really fast because there's no data. And Rome 3332 was added and let's add another one. I can use incremental just to show you here. We also do the monetization. Okay. It's done. Okay. So we have our cluster in Rome with three instances. 3331 is the primary and the other two are the secondaries. Next step, we need to deploy a router on this cluster. So I'll create an account for router. 
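The demo narrates the AdminAPI calls without showing their literal syntax, so here is a minimal reconstruction of the session so far in MySQL Shell's Python mode (run inside mysqlsh, where `dba` and `shell` are built-in globals); the account name and option values are assumptions based on the narration.

```python
# Inside mysqlsh, in \py mode.
shell.connect("clusteradmin@rome:3331")

# The instance we are connected to becomes the primary of the new cluster.
cluster = dba.create_cluster("rome")

# Add the other two instances; clone is the default/recommended provisioning
# method, incremental recovery replays binary logs instead.
cluster.add_instance("clusteradmin@rome:3332", {"recoveryMethod": "clone"})
cluster.add_instance("clusteradmin@rome:3333", {"recoveryMethod": "incremental"})

print(cluster.status())
```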
We also have a command in the enemy API to create an account that is called setup router account and this will create an account with a minimum set of privileges to operate on router. So let's name it router admin, some password. Okay. The account is created. And now we have to bootstrap. So let me, oops, let me create here. So to bootstrap, we use the bootstrap option. We need the admin account of the cluster, cluster admin at Rome 3331. I will use, I want to deploy router in the directory. So I'll call it router Rome. I'll give it a name, name router Rome and I'll use the account, the account that I just created, the router admin and I will tell it to report it. So it says Rome. Okay. Password. Okay. So the bootstrap of router means the router will attempt to connect to the cluster. We'll obtain fetch the information from the metadata so it can know the servers that belong to the cluster. We'll register those servers as metadata servers and we'll then populate the configuration and it created here this directory. It's router Rome with the configuration, the logs and the start and stop scripts. So I will just start my router. Okay. And I will show you the log. Here you can see in the log the router fetching information from the metadata servers, it detects everything. There's some info about the default routing strategy, round robin and the router is up and running. We can see we have a command list routers that will show you all the routers registered in your cluster. So you can see here our router Rome registered here. The router performs updates for checking in in the cluster. We'll give that information here and you have the ports on where the router is running, the ports for read-only traffic and the ports for read-write traffic. And now we have implemented two simple applications to do traffic on router using the, I'll just quickly show you using the connector Python, my school connector Python. So we just establish a connection to the router, this one on the read-write ports. We do a query to select the host and the ports to know to which member are we connected to. And then we just create the scheme if it doesn't exist and we insert a bunch of the random data and then we print some, some stats about the queries. So let me put it running. So here's our app running on the read-write ports. As you can see, the traffic is going to Rome 3331, that is the primary. And we also have an app to do just read-only traffic. And this one can see it's doing around Robin on the secondary members 3332 and 3333. So let me show you the status of the cluster. Here's cluster with our three members, all good, up and running. And this is it for the standalone cluster. So next we're going to create a cluster set on this cluster. So this one will become the primary cluster. And like Kenny was saying, just a command and we have our cluster set created. So command is create cluster set. We're going to give it a name. Let's call it cluster set. And Shell will do a bunch of validations again to see if the cluster is compliant for cluster set. It will create a new entity in the metadata, set a bunch of configurations and we'll return a new object, this cluster set object. And we can see the status. And we can see now that our cluster set has just one cluster. The ROM cluster, the primary one, it has here the indication of the primary member of the primary cluster. It's healthy, just basic information about our cluster set. 
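The small read-write test client described above might look roughly like this with MySQL Connector/Python; the Router port, credentials and schema/table names are assumptions (6446 is only Router's default read-write port for classic connections).

```python
import random
import time
import mysql.connector

# Connect through MySQL Router's read-write port; Router forwards us to the
# primary, so the application never has to track the topology itself.
cnx = mysql.connector.connect(host="127.0.0.1", port=6446,
                              user="app", password="secret", autocommit=True)
cur = cnx.cursor()

cur.execute("SELECT @@hostname, @@port")
print("writing to:", cur.fetchone())

cur.execute("CREATE DATABASE IF NOT EXISTS demo")
cur.execute("CREATE TABLE IF NOT EXISTS demo.t "
            "(id INT AUTO_INCREMENT PRIMARY KEY, v INT)")

for _ in range(1000):  # generate some traffic
    cur.execute("INSERT INTO demo.t (v) VALUES (%s)", (random.randint(1, 1000),))
    time.sleep(0.1)
```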
We have an option called extended that we can use to get more information from the ClusterSet status. With it we get information about all the clusters and general information about the ClusterSet. Okay. So far we only have the primary cluster, so the next step will be to... actually, I forgot to mention: as you can see in the Router log, Router detected that this cluster is now part of a ClusterSet, and there's an indication that the Router was not bootstrapped to use ClusterSet. What does this mean? There's also a list routers command for the ClusterSet, which lists the routers running in the ClusterSet, and there's a warning saying the Router needs to be re-bootstrapped. Why? Because when you bootstrap a Router against a standalone cluster, some settings are different from when you bootstrap it against a ClusterSet. The main one is the TTL — the interval at which Router fetches the metadata. By default Router uses half a second, and that's a very low value for ClusterSets, because you may have lots of replica clusters in different data centers, with network latency and so on. So we decided to increase that value, to five seconds, and at the same time we enabled the Group Replication notifications by default: Group Replication uses the X Protocol and sends notifications to Router whenever a topology change happens, and with that we could increase Router's TTL. So what I'm going to do now is re-bootstrap the Router, using the force option because it was already bootstrapped, and then stop and restart it. You can see here in the log that Router is enabling GR notifications, and if I run the list routers command again, the warning is gone — the Router is fully operational for the ClusterSet. Okay. Next, let's create a replica cluster. Let's create the first one in Brussels. Using the create replica cluster command I will create a replica cluster on Brussels, port 4441, and I'll call it Brussels. Again, Shell determines the best method for provisioning. It's clone: it takes the snapshot, applies it on the target, and creates a new cluster there. Clone is done, and — here's the important part — what Shell also does is set this cluster up as a replica cluster in the ClusterSet, so it sets up the asynchronous replication channel we saw before in the slides. This is a managed replication channel: any topology change regarding the primary of the primary cluster or the primary of the replica cluster is handled automatically, so the channel is always established from the primary member of the primary cluster to the primary member of the replica cluster — the primary member of the primary cluster is the source and the other one is the replica. Whenever a failover, or even a planned switchover, of a primary happens, the channel is automatically re-established, so users don't need to worry about this asynchronous channel. Shell does that, waits for the transactions to synchronize, applies the settings and so on. And we now have a new replica cluster in Brussels. In the extended status we have the information about our primary, Rome, with its three instances, and Brussels, the new replica cluster, with only one instance for now — the one I picked to be its primary. And as you can see, that primary is read-only.
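A condensed sketch of the ClusterSet steps shown so far, again in Shell's Python mode (names and ports are the demo's; treat the exact option values as assumptions):

```python
# Promote the existing Rome cluster into a ClusterSet
cluster = dba.get_cluster("Rome")
cs = cluster.create_cluster_set("clusterset")

# Add a replica cluster in another data center; Shell provisions it (clone by
# default) and configures the managed asynchronous replication channel.
bru = cs.create_replica_cluster("clusteradmin@brussels:4441", "Brussels",
                                {"recoveryMethod": "clone"})
bru.add_instance("clusteradmin@brussels:4442")
bru.add_instance("clusteradmin@brussels:4443")

print(cs.status({"extended": 1}))
```

Re-bootstrapping an already-deployed Router for ClusterSet use happens outside the Shell, with `mysqlrouter --bootstrap ... --force`, as described above.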
This is another thing that Shell takes care of: all replica clusters are read-only. We use the Group Replication member actions — one of the member actions we use keeps super read only enabled on all members regardless of topology changes, regardless of automatic failovers. So all members are kept in super read only mode. You can also see here the information about the replication channel: the receiver is Brussels 4441, the primary member of the replica cluster, and the source is Rome 3331, the primary member of the primary cluster. I also have the Brussels cluster object here; if I check its status, it shows that this is a replica cluster and that the ClusterSet replication status is OK. Now I will add members to Brussels — 4442, using clone. As you can see, the operations on the clusters are the same; it doesn't matter whether the cluster belongs to a ClusterSet or not, the commands are pretty much the same: you add instances, remove instances, check status, rejoin and so on. It's all the same AdminAPI. I'll add another member so we have three. Okay, almost there. I forgot to show something before: the describe command. It shows you the topology, and it's quite useful if you want to see a description of the ClusterSet topology without the full status. You can see here the list of clusters — Brussels and Rome — the role, replica or primary, and the instances that belong to each one. Then the other cluster: we're going to create one more in Lisbon. Same thing, create replica cluster, Lisbon 5551. There are several options for the create replica cluster command; meanwhile, while this one is being created, I can talk a little about them. They are a subset of the options of create cluster: you can pick the SSL mode you want, the member weight, the auto rejoin attempts, several Group Replication configurations, the consistency level — you can pick all of those in create replica cluster, just like when you create a standalone cluster. We also have a new feature introduced in 8.0.28, which is to select the subnet used for the internal replication accounts of a cluster. Some people are interested in that for security reasons: you can now pick the subnet your internal replication accounts are created on. Adding the last instance to Lisbon... and that's it. We now have our ClusterSet with three clusters, one primary and two replicas. We can see the description with all the instances, and the extended status. We actually have three levels of extended. Extended 2 gives you more information, such as the fenceSysVars, the member IDs, the number of applier worker threads, the transaction set consistency status, and, for the ClusterSet replication channel, more details such as the receiver status, the timestamps, the status of the threads and so on. Extended 3 gives you even more information about replication, the threads, the GTIDs and so on. And yeah, that's it for the setup of the ClusterSet — quite simple, as you can see, just very simple commands. Let's recap; back to you, Kenny. Oh yeah, thanks, Miguel.
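The inspection commands used throughout the setup boil down to a handful of calls (Python-mode sketch; the output shapes are summarized from the talk, not reproduced exactly):

```python
cs = dba.get_cluster_set()

cs.describe()                 # compact topology: clusters, roles, member lists
cs.status()                   # health summary: primary cluster, global status
cs.status({"extended": 1})    # per-cluster detail incl. the replication channel
cs.status({"extended": 2})    # adds fenceSysVars, applier workers, receiver status
cs.list_routers()             # routers registered against this ClusterSet
```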
So what Miguel did was set up the ClusterSet — and can you go back one slide, please? Thank you. We've set up the ClusterSet and bootstrapped Router. Now what I want to talk about a little is the operational part. The setup is simple: as you've seen, it's just a couple of commands and you have a whole database architecture running — nine databases in this case and one Router. But what about the next thing, the management of it? How do you change a primary? We're not talking about failure cases yet; this is just a planned switchover. So let's talk about how to change the primary cluster on a healthy system, meaning I have a replica cluster and I want to make it the primary — promote Brussels and demote Rome. It's a single command, set primary cluster, and the whole architecture is reconfigured; the Routers automatically learn about the change and redirect traffic depending on their configuration. Everything is guaranteed in terms of consistency. Since it's a planned switchover, what we do is put the primary cluster in read-only mode, make sure the replica is in sync, change the asynchronous replication configuration between all the replicas — Lisbon is reconfigured as well — and then make the new primary in Brussels read-write; Router then reconnects its write traffic to Brussels. There are some Router configurations that we'll touch on later, but that's basically the gist of it — that's what happens, what's being shown. Here's the example of what happens: we put the set primary cluster command in the purple boxes, with all the changes that happen, and you can see we're promoting Brussels to be the primary. So yeah, up to you to show this, Miguel. Okay. So before changing the primary cluster — the switchover at ClusterSet level — I'd like to show the switchover at cluster level: changing the primary member of an individual cluster, and let's see how the whole ClusterSet, and Router, react to that. Let's check the status of our Brussels cluster: our primary is this one, 4441, and like I told you before, the primary is the receiver of the asynchronous replication channel — the receiver is the replica end and the source is the primary member of the primary cluster. So let's do a planned switchover of the primary member of the replica cluster using set primary instance, and pick another one, for example Brussels 4442. Okay — Shell handles everything: it does the switchover of the primary instance and then the switchover of the channel as well. I'll show you the status, and as you can see, the primary has changed to 4442. If we check the status of our ClusterSet using extended, we can see here on Brussels the information about the ClusterSet replication channel: the receiver is now the newly elected primary and the source keeps being Rome 3331. So this is all automatic; it happens on planned switchovers and it happens on automatic failovers in Group Replication — all automatically handled by this managed asynchronous replication channel.
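The two kinds of planned switchover map onto two AdminAPI calls (Python-mode sketch with the demo's names):

```python
# Switch the primary member *within* one cluster (here, the Brussels replica)
cluster = dba.get_cluster()                 # connected to a Brussels member
cluster.set_primary_instance("brussels:4442")  # channel endpoints are re-pointed for you

# Switch the primary *cluster* of the whole ClusterSet
cs = dba.get_cluster_set()
cs.set_primary_cluster("Brussels")          # the old primary becomes a replica cluster
```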
Now the interesting part is to change the primary member of the primary cluster and see how Router reacts, because as you can see Router is using the primary cluster — we're sending traffic to the primary member of the primary cluster. Let's see what happens if we change the primary instance of Rome to Rome 3332. Okay. First thing, in the Router log there's a notification that the primary changed, and as you can see Router is now using Rome 3332 as the target for read-write traffic. Let's take a look at the ClusterSet status, extended. What we expect now is that the replication channel was updated accordingly in all the replica clusters — and indeed, here on Lisbon the receiver is Lisbon 5551 and the source is now the new primary in Rome; same for Brussels. All automatic. So that was about changing the primary of each individual cluster. Now let me show you the switchover of the primary cluster in the ClusterSet. For that we also have a very simple, very intuitive command, called set primary cluster, and let's pick Brussels to be the primary. And it was really fast. The AdminAPI does a bunch of things: it starts by verifying the status of all clusters, verifies the transaction sets, waits for the transactions to synchronize if they're okay, and then updates the topology of the whole ClusterSet, changing the managed replication channels accordingly. You can see here the information about changing the replication source of Rome 3331 to Brussels, because Brussels has become the primary cluster now. This is all done with locks to ensure the operations complete successfully. And it's done. You can see now that Router is already sending traffic to Brussels 4442, which is the new primary, and sending the read-only traffic to Brussels as well. Let's take a look at the status of the ClusterSet: Brussels is primary, Lisbon replica, Rome replica. And that's it — very easy, very simple, no big deal. So let's recap again and go to the next topic. Yeah, thanks. So this is all planned changes, right? We'll go to failure scenarios at the end of the demo, but before that we need to talk a little about Router and what options there are in Router. In the demo Miguel configured one single Router, in Rome, but in reality we have customers using tens or even hundreds of Routers on a single database architecture. Router is actually quite a lightweight application, and one of the most common and recommended ways to deploy it is near, or closest to, the application — even running on the application server itself. There's also the possibility to have a separate layer of Routers, but then you need some load balancing and some failover for Router itself. If you have a highly available application, run Router on all of those application servers; then the added latency becomes much less of an issue. So we support many, many Routers, and that's why the GR notifications Miguel briefly touched on matter: instead of Router polling the status of the clusters every half a second, which is the default — two clusters and six members in the diagram, or nine members in what Miguel has deployed — with the X Protocol notifications the server sends notifications to Router when the topology has changed.
So instead of every half a second, Router then only polls the status every 15 seconds or so — I think that's the default, and it's all configurable — but with the notifications the Routers get told immediately if there's a change in topology, so it's actually even faster than the half-second polling. Router is quite lightweight, so you can have many Routers. But with MySQL InnoDB ClusterSet it becomes a bit more complex: what do we do with Router, where does Router connect to? There are a couple of options now. You can see here in the diagram that green is reads and red is reads and writes. MySQL Router is a TCP-level load balancer, so you get a port for write traffic and a port for read traffic, which gets load-balanced across all the secondaries. Now, with ClusterSets we introduced different target modes in Router, and the default target — what we're demonstrating today — is the primary, "follow the primary": target cluster primary, which means reads and writes, so the Router entirely, will always connect to the primary cluster. We have different Routers here — four Routers in this diagram: in Rome there's one with target Rome and one with target primary, and in Brussels we have one with target Brussels and one with target primary. The ones that are target primary will always send reads and writes to the primary. The one with target Rome will send reads — in this case it's a reporting application, so it's not performing any writes, it's only opening read connections — and those get directed to the Rome cluster; the same with Brussels. So you can have a Router that opens a port giving read access to the local databases: the reporting application in Brussels will just connect to the Brussels databases, it does not need to go to the primary, which is Rome in this case. So you can configure it per Router: you have a global setting, a sort of default, and then a per-router setting. One thing to note is that the Router with target Rome — the one on the left, in Rome — also accepts writes, but the Router with target Brussels does not accept writes; its write port is closed. When Brussels takes over and becomes the primary cluster, its write port will be opened automatically and the one targeting Rome will be closed. All of this can be changed online: there's no restart of Router, you don't have to log in to Router — everything happens through the metadata schema that we use, and Router checks the metadata schema for changes. And it all happens from within MySQL Shell: you have a Shell, you connect to a database, and the rest happens automatically. That's what we're going to show here with three data centers. The same applies if we add Lisbon to the mix: a Router in Lisbon with target primary goes to the primary, and so on. Three data centers, same deal. So, up to you, Miguel, to show. Yep, let's see this happening. Okay, first let me show you the routing options with the routing options command. You can see here that in the ClusterSet, since we haven't changed anything, we're using the defaults — these are the global routing options for the whole ClusterSet.
And there are two of them: the target cluster, which Kenny was talking about — by default it's to follow the primary — and the invalidated cluster policy, which is the policy triggered when a cluster becomes invalidated; we will talk about invalidation of clusters later on. Right now what I want to show is the target cluster option. So right now, just to remind you, our primary is Brussels, so that's where Router is sending the traffic. But if we change the routing policy — the target cluster one — and say we want the target cluster to be Rome, well, Rome is a replica cluster, right? So after we change it you will see that Router is no longer accepting any write traffic, because Rome is a replica cluster, but the read-only traffic is going there. We can use this option, like Kenny was saying, for locality purposes: if you want your Router to only operate on that specific cluster, you can use it. So now let's perform a switchover of the primary again: let's change the primary cluster to Rome. Since the target cluster routing option is Rome, with the change of the primary cluster to Rome the Router should accept writes again. And as you can see, the app is already showing that the write traffic is going to Rome, because Rome has been promoted to primary cluster. Let me show you the routing options again. Okay — I can change back to the default, target cluster following the primary, and this is how you do it: the command accepts two arguments, the first one the name of the option and the second one either "primary", to follow the primary, or the name of the target cluster that you want to use. And that's it. With this we complete the happy path, and now we should talk about failures. Yeah. So after this Router part, let's do some failure scenarios. There are a couple of things, and some things are handled automatically while some are manual. Here's one that is automatic — no interaction needed, nothing to be done. When the primary member of the primary cluster fails, the asynchronous replication is broken and a new primary needs to be elected. That happens automatically: if you kill the primary, a secondary will take over, and — if you can show the next slide — the asynchronous replication will be automatically reconfigured. These are all features we built into the server; this is part of the asynchronous replication connection failover settings. If you go back one slide, Miguel, please — you can see that the primary member in the replica cluster has this replication configuration, but the replication is actually monitoring all the nodes and their membership in the group. If members are added and removed, it automatically learns about them, so if there's a new primary it will just reconnect and re-initiate replication from there. And it's also using, of course, GTID-based replication, which makes this all possible. So we're really using all of the features in the server — and we had to add quite a lot of features in the server in order to make this work.
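For reference, the routing-option commands demonstrated just above look roughly like this (Python-mode sketch; the per-Router identifier is an assumption — use whatever `list_routers()` reports):

```python
cs = dba.get_cluster_set()

cs.routing_options()                          # show global and per-router options

# Pin traffic to one cluster (locality) ...
cs.set_routing_option("target_cluster", "Rome")

# ... or go back to the default behaviour of following the primary cluster
cs.set_routing_option("target_cluster", "primary")

# The same option can also be set for a single Router only, e.g.:
# cs.set_routing_option("rome::routerRome", "target_cluster", "Brussels")
```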
And then the same happens — that's on the next slide — when the primary crashes in the replica cluster. Replication is gone, right? Because the replication channel is only configured on one machine. So there were changes in Group Replication such that when the new primary gets elected, that replication channel is automatically configured and started: the new primary in the replica cluster configures and starts replication and continues like that, all fully automatic as well. Shell doesn't need to do anything — there's nothing to be done, it's all within the server. The architecture arranges it by itself; no monitoring nodes making any decisions, it's all fully integrated. And then we have the case we want to demonstrate: the network partition. This is one of the failures where — well, with disaster recovery various things can happen, right? It could be that the whole of Rome goes away — it's on fire, it's gone, everything dies and is not accessible. It's also possible that only the Rome databases fail for some reason — maybe they were put in the same rack and the rack goes down; the applications are still there, but the database is gone. That's another scenario. But the scenario we want to show here is a network partition, meaning Rome is still running, the machines are running, but the link to the internet is gone. Let's assume this is some e-commerce website: there are no users that can log in anymore and order anything, Rome is not accessible. So what we have in this situation is a primary cluster and a replica cluster; Rome still thinks it's primary — and it is still the primary — but traffic is not arriving from the application anymore, and from Brussels, where the other application servers are, Rome cannot be accessed either. So what we need to do is promote the replica cluster. Now, before we do that, we need to think about fencing. Fencing is a feature we introduced back in January, and what it does is: if you still have access to this partitioned primary, fence it first — prevent writes on that cluster — to reduce the split brain that we could get. It's a single command: you just log in with Shell on Rome and say, okay, fence yourself. There are two options: fence writes and fence all traffic. With fence writes, Router will no longer accept writes but will still accept read traffic. With fence all traffic, everything is stopped — Group Replication is actually stopped altogether and Router sends neither reads nor writes. So you can choose. If you do fence writes, you can also unfence it and bring it back: if the network partition is resolved and you haven't failed over to another replica cluster yet, you can just unfence it and everything continues again. All automatic — one command to do it and one to revert it — so it's quite easy to use. And then the next step, that's on the next slide, is that you promote the replica cluster: you do not use set primary cluster, you use force primary cluster — a failover instead of a switchover, where you force Brussels to become the primary.
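A rough sketch of the fencing and forced-failover calls described here, as introduced around MySQL Shell 8.0.28 (Python mode; cluster names are from this demo):

```python
# On the partitioned (old primary) side, if it is still reachable:
rome = dba.get_cluster()            # connected to a Rome member
rome.fence_writes()                 # block writes; reads keep working
# rome.unfence_writes()             # revert, if the partition heals before failover
# rome.fence_all_traffic()          # harsher option: stop reads and writes entirely

# On the surviving side, promote a replica cluster (failover, not switchover):
cs = dba.get_cluster_set()
cs.force_primary_cluster("Brussels")

# Once Rome is reachable again it stays INVALIDATED and must be brought back
# explicitly, e.g. with cs.rejoin_cluster("Rome").
```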
And yeah, the other replicas — for example the Lisbon one — will also be automatically reconfigured and start replicating from Brussels. And that's what you're going to demonstrate now, I suppose, Miguel. Oh, maybe one more thing — you're right, this next slide is Router integration. Router will learn about this topology and redirect traffic as needed. There are some quirks to it, because when the network partition is lifted, Router will automatically see: whoa, I was connected to a fenced — or not even fenced — system, but I see that Brussels is the new primary, it's newer. So it automatically redirects all its traffic to the new primary, and that happens automatically — no action needed on any Routers, nothing, it will just happen. Okay, let's show it. So Kenny was talking about failures at the instance level first, so let me show a failure of, for example, the primary member of Rome, which is the primary cluster. I'm connected to Rome. Let me see — okay, the primary member of Rome is 3332. So let me kill it — let me find the right process; not that one, that was Brussels — here's the Rome one. Kill the process, and back to MySQL Shell. Okay, let me get the ClusterSet object and let's see the status. As you can see here in the logs of the app, there was a short failure — that was while Group Replication was performing the automatic failover. A new primary was elected, Rome 3331. Now I want to show you the status of the replication channel: for example, here on Lisbon you can see that the source is Rome 3331, the new primary member elected in Rome. This all happened automatically — it's all built into Group Replication. The replication channel has the information about the group topology, the membership info, and it will act accordingly and re-establish the connection whenever a failover happens — and this is for both ends, the source end and the replica end. Yeah, so the next disaster scenario is the network partition. We're going to isolate Rome from Lisbon and Brussels. Okay — Rome is now isolated from Lisbon and Brussels. Let me start Shell, connect as the cluster admin, and get my ClusterSet object. This is taking a little longer than usual because Shell is attempting to connect to Lisbon and Brussels and there's a timeout; that's why it takes a bit more. So, checking the status of our ClusterSet — again the timeouts, just a few seconds. Okay. As you can see, Brussels is unavailable from Rome: we cannot connect to any member, the status is unreachable. Same for Lisbon. Our primary, Rome, we can access — we're on that partition — and there's an indication here regarding the two replica clusters. Now, if I go to Brussels, start Shell there, connect to one of its members, and get the ClusterSet object from that side, what I see is that Rome is unreachable: we cannot connect to it, it's on the other network partition. And our two replica clusters, Brussels and Lisbon, have lost the replication channel. The information Shell gives us is that the primary cluster is not reachable, so we assume it's unavailable, and our ClusterSet status in general is now unavailable: we lost our primary cluster. So now what we're going to do is fence Rome from write traffic. For that, you get the cluster object.
And we're going to use the very simple fence writes command, and Shell takes care of enabling the super read only management member action, to ensure that all members stay super read only even if a failover of the primary happens — it enables super read only on all members. And as you can see, Router now cannot send any more write traffic to Rome; we're fenced to writes. Then we can perform the failover — the manual failover — on our ClusterSet and elect one of the replicas to become the primary. So let's elect Brussels to become our primary cluster, using the force primary cluster command. This does several things: it verifies the status, synchronizes transactions if needed, and then updates the replication channel on all the replica clusters so they replicate from the new primary, which is Brussels. And on top of this, it reconciles the GTID sets — something we haven't talked about before. In Group Replication, whenever a topology change happens — if the group is bootstrapped, or a member joins a group, or auto-rejoins a group — Group Replication generates a view change transaction. Those transactions are a problem because they become errant transactions if this happens in a replica cluster. To overcome all of that, there's a new feature in Group Replication that lets you define a dedicated UUID to be used for those view change transactions only. In Shell we take care of generating such a UUID and setting it in the whole ClusterSet, so whenever we do a forced failover or a switchover we take care of the reconciliation of the GTIDs, to ensure we have no trouble regarding those. That happens here as well. Showing the status: our Brussels cluster has become the primary, and Rome, the one that was the old primary, is unreachable. And as you can see, there's a warning saying that the cluster was invalidated. This cluster is invalidated, so if the network partition is restored, it won't rejoin the ClusterSet automatically, because it became invalidated — you will have to perform a manual rejoin using the rejoin cluster command, and you may have errant transactions, so you'll have to handle that and so on. So yeah, this ends it — we're running out of time. Yeah, thank you, Miguel. I'm not sure if there's any time left; I think we've just run out, but I think there's some time for questions. Thank you very much — this is InnoDB ClusterSet. You can check our website, dev.mysql.com; there's full documentation on it, which explains all of this very nicely and how it works. Or reach out to us on Slack. Anyway, thank you. Thank you. Thanks.
MySQL InnoDB ClusterSet brings multi-datacenter capabilities to our High Availability solutions and makes it very easy to set up a disaster recovery architecture. Think multiple MySQL InnoDB Clusters into one single database architecture, fully managed from MySQL Shell, and with full MySQL Router integration to make it easy to access the entire architecture. In this session, we will cover the various use-cases and features of InnoDB ClusterSet while guiding you on how to set it up from an existing InnoDB Cluster, extend it, manage it, and deal with the various possible failures, all using Shell's AdminAPI. We will also cover each individual feature of MySQL Router integration which makes connections to the database architecture easy.
10.5446/56944 (DOI)
Hello, welcome to my talk about MySQL component services and what's new in them — and we'll also have an interesting demo, so I hope it will be very interesting to you. Let me first introduce myself. My name is Georgi Kodinov; people in the MySQL ecosystem know me as Joro. I've been working on MySQL for, well, forever — and I still love it, it's quite interesting. I'm based in Plovdiv, Bulgaria, and in a prior life I used to work as a banking IT manager, which explains my interest in software development and security, I guess. All right. First I'd like to do a little refresher on what the component infrastructure architecture is, give you the bigger picture and introduce the interesting parts of it all; then we can talk specifics and, most importantly, have a demo. Right — MySQL server modularization. As you can see, this is, in a nutshell, how the MySQL server binary is extensible. As you may know, it's a single-process, multi-threaded binary, so the only way to extend it is to load a new shared object into it, and we can have a number of these objects. One kind, the traditional one for MySQL — from 5.1 onwards, I guess — is called plugins. They are depicted here on the left. Plugins are basically very similar to PHP plugins, I guess, or web server plugins: they expose certain interfaces called plugin APIs — depicted with the arrow at the bottom — and the server can call these plugin APIs. When plugins need to call the server, to consume some functionality out of it, they can call something else, called plugin services — that's the topmost arrow around the plugins. Well, it so happens that we have very few of these plugin services, so plugins cannot really do much through them — but we do have a couple. Why we don't have more is basically because the plugins have access to all the symbols inside the MySQL server binary, so they practically don't need the plugin services: they can just call into the server and link to more or less any symbol inside it, which is what it is, but not very good for encapsulation, as you can imagine. This is one of the reasons why, back in 8.0, we started looking into a better way of doing componentization in the MySQL server, and we call these components. Components are basically containers for code, which can contain implementations of what we call component services. A component service is, more or less, an abstract interface that can have one or many implementations. If a component service does not have an implementation, then it's not really there, is it? So this means that each component service that exists has at least one implementation. Unlike with plugins, we register these component service implementations into something we call the registry. There you can search for implementations by name — basically ask, do I have this particular service implementation? — and you get a pointer to the service implementation as a result, if it is available. You can search for a service, or for a particular implementation of a service: it's a two-part name divided by a dot. So if you just ask for "foo", you will get the default implementation of the service foo, and if you ask for "foo.bar", you will get the "bar" implementation of the service foo. Right. So what are the other parts depicted here?
The most important part is that, in addition to the registry, you obviously need some way of loading components, and as you load components, they might register new services that they implement. So we have something called the dynamic loader, which is basically responsible for actually loading the shared object, initializing it, and storing the service implementations it contains into the registry. And in the opposite direction: when you unload a dynamic component, it will deregister the implementations that it contains. Right. The registry and the dynamic loader together form something we call the minimal chassis. The minimal chassis is basically a library which is self-contained, more or less. It so happens that it is linked inside the MySQL server binary — or the server component, as we call it now — but it could also be linked into a number of other binaries, and then those binaries could have their own components loaded through the minimal chassis dynamic loader, which is quite interesting and opens a lot of possibilities: loading components not only into the MySQL server binary but into other binaries as well. Why not? We have one such binary, to migrate keyrings: it uses the minimal chassis to load the keyring components and does the migration without needing the server. Right. This is all great, but you obviously want some predictability: basically, a list of components that will be loaded and present for you to use when you start your server. Obviously, the minimal chassis does not deal with persistence, because it is what it is — minimal, right? And hence we have an overload of the dynamic loader, which we call the persistent loader, housed in the server binary. This is a piece of server code — a service implementation — that actually reads a SQL table and instructs the dynamic loader to load all the components stored in that table. This is how the mysql.component table gets read at startup. One other thing that we have here is the file scheme implementation. A component is loaded by a URN that starts with a scheme; the scheme I have here is the file scheme — file, colon, slash, slash — and it resembles the browser notation. What it does is a dlopen, and then it passes the result of the dlsym it does on one particular symbol to the loader, and the loader can then proceed and source the component being loaded. This means this is extensible — you could be loading components from other locations as well — but right now all we have is the file scheme implementation. Right. So this whole box on the left, including the plugins — which is very important, because the plugins do reference internal server symbols — is all part of the so-called server component. A component, on the other hand, basically does not reference any external code. When it needs to access code in other containers, it just searches the registry for service implementations and calls them like that. So this means that all components basically have explicit dependencies, expressed in their manifest basically, which makes it pretty easy to decide whether you can load a particular component or not.
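As a side note, in component code the registry lookup described above looks roughly like this (a minimal sketch, not a complete component; "foo.bar" is the same placeholder name used in the talk):

```cpp
#include <mysql/components/services/registry.h>

// 'reg' is the registry handle that every component can get hold of.
static void use_foo(SERVICE_TYPE(registry) *reg) {
  my_h_service h = nullptr;
  // "foo" alone would return the current default implementation;
  // "foo.bar" asks for the "bar" implementation of service "foo".
  if (!reg->acquire("foo.bar", &h) && h != nullptr) {
    // ... cast h to the concrete service type and call it here ...
    reg->release(h);  // always release what was acquired
  }
}
```

Real components typically declare such dependencies with the REQUIRES_SERVICE macros rather than acquiring everything by hand, which is also what the demo component shown later in this talk does.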
You just look at whether you have enough services implemented in the registry that the component needs, and if you do, you can basically load it and operate. That's why I am depicting the other components here as separate boxes on the right: they are self-reliant and self-contained, aside from the service pointers that they operate with. And these are the green lines here — kind of squiggly green lines, my attempt to depict the service interfaces, which are how you can operate it all, basically. These are the service interfaces that one can get from the registry, because every component gets at least one service interface when it starts, and that's the registry interface. So it can query the registry and discover more interfaces as it needs to, and these green lines are the service references that you get from the registry and use. Now, obviously we have a lot of plugins. If you know the MySQL distribution, you would know that most of our external, additional functionality in the server is implemented as plugins. These plugins contain a lot of useful features that we are gradually trying to migrate towards components, but, well, they are what they are — they are plugins right now. And obviously you need a transition period, basically allowing plugins to interact with the component infrastructure, either to consume component services or to actually even register component services. As you can see, we added one more plugin service here, which is basically "give me the registry interface" — and "release the registry interface", obviously. Those are the red and the blue lines around the plugins box. This basically means that plugins can actually access the registry and interact with the services in it, either to register their own service implementations or to consume implementations from the registry. So this is kind of the big picture of MySQL server modularization in a nutshell. Right. Some terminology — I've already used most of it, but let's make it more apparent. We'll start with the yellow box in the middle, which is the component. I have these arrows to be relationships between terms, and the terms are what they are. So a component contains code, obviously, and that code could eventually be service implementations. A component can register these service implementations into the registry, there on the right, and it can also interact with the registry through the means of service interfaces, on the left. A component can also implement services — it is what it is — and these services are, as I said, abstract APIs. Towards the right you see some examples of services, like UDFs, performance schema, system variables, table access, et cetera, et cetera — there's a lot of services right now, and we keep adding to them; we'll get through this in more detail later. Now, how do you load a component? You use the dynamic loader, which is at the top left, to load and unload components. And, as I said, you need some persistence implemented somehow in the MySQL server, and the obvious way to do that is to use the persistent dynamic loader, which encapsulates the dynamic loader. In addition to the functionality provided by the dynamic loader to load and unload components, it also sources the SQL table mysql.component, to which the server writes through the INSTALL COMPONENT and UNINSTALL COMPONENT commands.
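On the SQL side this persistence is visible through a few statements (using the validate_password component as an example, since it ships with the server):

```sql
-- Load a component and persist it across restarts
INSTALL COMPONENT 'file://component_validate_password';

-- The list the persistent dynamic loader reads at startup
SELECT * FROM mysql.component;

-- Unload it again (this also removes its row from mysql.component)
UNINSTALL COMPONENT 'file://component_validate_password';
```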
So these are the terms that are used — hopefully consistently — throughout the implementation and the docs. Right. I guess this kind of puts us on the same page, and I want to get to the fun stuff here, which is basically all the goodies that you can access through the component services. As you see, we have more than 90 service-related headers now, which is quite a lot. You can check the include/mysql/components/services directory of the MySQL sources, or the Doxygen documentation, which I'm trying to keep up to date with all the latest on the component services API, with documentation and more or less examples, so that you can use it to fully leverage the functionality provided by these APIs. Some of these component services we've already mentioned: the service registry and the dynamic loader, obviously. But the rest are no less important. We have interfaces to do error logging to the server error log. We have interfaces to register and expose system and status variables from components. We have interfaces to expose user-defined functions from components. You can instrument your component code using the performance schema instruments — we have services for that and headers to facilitate it all. But you can also expose new performance schema tables from components, so basically you can expose tabular information by registering your own performance schema table, which is great. You can deal with the attributes of the security context: you can set the user account that you are going to be executing as — because obviously this is native code, so you can basically do that kind of aliasing. You can do password validation. You can report runtime errors if you are running inside a SQL thread. And you can also use collations. Most importantly, you can do table access: you can read and write tables from your component, which I'm going to go into in much more detail later. Okay, some noteworthy additions to all of that. Obviously I cannot really explain all of the 90 services, but I'll try to mention some. Most recently, we migrated the keyring plugin APIs to component APIs — thanks to Harin Vadodaria for his excellent work on that. So now we can have the keyring backends — basically the places where keys are stored and retrieved from — as components. This allows us to do various interesting things, like migrating from one keyring backend to another without needing the server, just with a binary containing the minimal chassis. So it is very, very interesting, and as you can see here, there are some references to headers that you can check to read more about it. I mentioned the system variables — that you can register and unregister them. In addition to that, you can now programmatically set new values for existing system variables, which is kind of nice: you don't have to execute SQL to do that, you can just call a little service from your component and it will do the needful. Again, there is a reference to the header file that describes the service. One interesting service that I also wanted to mention is access to query attributes. Query attributes are quite new in the MySQL 8.0 client and server; basically these are optional bits of data that you can pass along with your query — kind of like metadata or parameters or whatever you want to call it.
The good part is that you can actually access those from your components, but you have to be running within a server session — obviously you need the context to extract the data from, otherwise there won't be any query attributes passed. They are transient things that only exist during the execution of a query. So you can, for example, have a UDF that accesses those. There is a service interface defined in this header that you can read about, and there's also one example of it inside the MySQL code base: I have a UDF that returns query attribute values to SQL using this very service. Right — and now the star of the show here, which is our table access service. You can use the table access service to read and write data from a table, which is great when you need to initialize your components: if you need persistent state, you can just access a SQL table at component initialization time and read all the configuration that you need, and then, as you go, you can even manipulate that persistent configuration by writing into it as you need to. So it is very, very handy for that very reason. Of course, if you want to do some analytics on a table, or somehow aggregate the information in a table, then that works great in a component too. You can even do index lookups and scans: you can specify the keys you want to use and the values for the keys you are searching for, and use index access like that, so you don't have to do table scans all the time — but you can, if that's what you need to do, of course. It also does transactions: you can do begin transaction and end transaction and roll back and all of that. It runs in its own separate transaction, which is kind of interesting. And again, that's the header file you need to access to find all the information about it. It's also in the Doxygen documentation — obviously in Doxygen format — but that's the header file you need to look for. Okay. So now to the more interesting part. I have taken the liberty of implementing a little HTTP component. Basically it's using libmicrohttpd — an open source HTTP server library, GNU-licensed, which I find quite useful; you can go download it and play with it — to implement an HTTP server in a component, obviously. So when you load the component, it registers one extra listening port, and when you send requests to that port, it executes them: the data it fetches — at least that's how I've created this component — is fetched with the table access service from a table. Okay. Some words of warning: even though it looks quite appealing, this component is still lacking some very, very important things, which are authentication, authorization and logging — and, well, instrumentation of resources. So use it at your own risk, because it basically circumvents all the normal MySQL authentication and authorization processes and logging processes, yet you can still access data. So be careful when you are playing with it, and try not to expose that port to any malicious actors. All right. So let me switch to my little screen here and go on with the show. Okay. So first of all, I'm going to start my MySQL server here — I like starting it in the console so I can see the messages. Yep, there we go, it's now listening on 3306. And I can actually verify that.
As you can see, it's listening on 3306. And here is my designated HTTP port — as you can see, no one is currently listening on that port yet, so it's empty right now, no one is listening. Okay. So now let's connect here and do some checking. I've already created the table: if I use the httpd database — show tables, show create table — right, there we go, we have a table which has a primary key on id, and then a data column which is currently a varchar of 2K. And I have some data in that table; you can see it has two rows: id 1 has "data 1" and id 2 contains a NULL value as the data. Right. So let's install the component. This is how you install a component: you do INSTALL COMPONENT, the file scheme that I mentioned — file, colon, slash, slash — and then the name of the component. Right. Now, as you can see, it's already listening on port 18080, something it wasn't doing before: when we installed the component, it started listening on that port. So I'll uninstall the component here, just to make sure that it stops listening — and as you can see, the listener disappears. I'll add it again and check again whether it's there. Wow, it's there. Okay. So now let's do some access. You supply the id as an argument: basically what this HTTP component does is take id as an argument, look it up in the table, and return the data associated with that id. It does so using an index, and I'll show you that later. Right now I'm going to access the first id, and as you can see — wow — I got "data 1" back. If we scroll back to the table access here, you see id 1 corresponds to "data 1". Right. If I ask for id 2, I'll get the NULL value, and if I ask for id 3, the component can't find it right now, so it basically returns me an empty reply — which is not very good, but it is what it is, right? It's just a demonstration and so on. So as you can see, the thing works. I don't have a row three, but if I now insert into the table id 3 with the data "Gizmo 3", I will get "Gizmo 3" back for id 3. I've already inserted it, so now three is full. But if I ask for 4, it's again empty, right? So as you can see, this thing actually works, and it works pretty well. So now I'm going to uninstall the component — as I said, it is kind of dangerous to leave it hanging because it doesn't do any authentication — and I'll stop my server here, so I can show you around the component implementation. All right. Okay. So I'll be really quick. First of all, as you can see, inside my server directory under components I have a component called httpd_table. Here I have checked out the GitHub repository that's in my slides, which contains these four files. The init SQL here is the definition of the table — so init.sql is this, basically: it has CREATE DATABASE httpd, then CREATE TABLE for the httpd replies table, with id and then data, and then it inserts two rows into it, 1 with "data 1" and 2 with NULL — I mean, what I had there already. So this one is pretty straightforward, I guess. I'll now switch to my Visual Studio Code so I can demonstrate the source code to you. Hang on. Okay. Right, there we go. So we'll start with the CMakeLists for the httpd_table component — this lives inside the same directory, and this is what it has. I supply one parameter, with the location of the libmicrohttpd binaries.
And I use that parameter to find the library and the include files. And then I have my component here — there's a handy CMake macro for adding components in the MySQL source tree. This is the name of the component, this is the source file, and then it says it's a MODULE, meaning it's not built in — which is a bit odd but kind of a tradition in CMake. And I mark it TEST_ONLY here so it doesn't get packaged into my binaries, but feel free to change that. This creates one target, the component httpd_table target, and I add my include directories and my link libraries to it. So this is all that you need to compile the thing, right — there's not much more to it. Now let's go to the meat of the implementation, httpd_table.cc. Okay, these are the places where the pointers to the service implementations needed by the component are going to be stored — as you can see, I have quite a few of these — and there is a corresponding REQUIRES_SERVICE list down at the bottom, in the component declaration; we'll get to that. It's a handy macro. Okay, so let's go to the component initialization. This is the component initialization function — as you can see, it's referenced in the component declaration — and it basically calls the libmicrohttpd "start daemon" call with the ahc_echo callback. In the component deinitialization I just stop the microhttpd daemon using the appropriate libmicrohttpd API. As you can see, I have the port here, and I have one global static variable to keep the daemon instance so that I can stop it — right, and that's all there is to it. I also use one thread per connection. This is important, because it's not running in the server's threads; it's running in its own operating-system thread. So now let's go to the ahc_echo function. This is kind of standard libmicrohttpd boilerplate — I won't go into a lot of detail — but here it extracts the id; this is how it's fetching the id value. And this is how it checks whether the request is for the root, slash, because if I had used some other URL it would have rejected it — I didn't demonstrate that, but it will reject it because of this line here. And when all is said and done and it finds the id, it will go and call fill table data, and then it will create a response from the buffer filled in by fill table data, right — then it will queue the response, destroy it, and it gets sent to the client; that's also straightforward. This is, in a nutshell, all the glue code I need for libmicrohttpd. So now let's focus on our table-filling function. This one is the most interesting part. Starting from the very beginning: it creates a table access handle — all table access goes through a handle. In that handle it adds one table, httpd replies — I mean, you know that one, right? And now it begins the transaction here. Right. Then it starts index access — the index it is served by — and the index access is again a handle. It starts with the PK columns: pk_cols is the definition of the primary key columns that it's going to supply, and the number of those columns. It is going to supply one column value, and that will be for the column id — that's what this means. Okay, so now it sets column zero, which is the key value, to the value of id. This is the value we are searching for.
And then it will do an index lookup — an index lookup by one key part — and hopefully it will position the table access at the result. It will then check the first column — that is, the data column; here's the id column and this is the data column, and mind the zero-based indexing — for nullability. If it is NULL, it is just going to return and basically send an empty reply to the client. But if it is not NULL, it will create a string buffer here, fetch column one — the data — into that string buffer it has just created, and then convert that buffer into a UTF-8 string and store it in the buf that was passed into the function. So it transcodes the MySQL string buffer and then sends it to the client. This is how you read a string field, right. And when it's done, it commits the transaction, and then it does the cleanup: it ends the index access, destroys the table access object and the table access factory, and then also destroys the string buffer — basically doing it all in reverse. And this is it. This is how you implement an HTTP service in a component. It's not a lot of code, and it's very understandable; it's also on my GitHub, so you can go there and check it out. I'll now switch back to my slides, so bear with me here, please. Okay, going to slide 15 — bear with me, please. Right. Some further reading, now that we are done with the demo — and thanks to the demo gods, it didn't really fail on me, except when I mistyped the ids. These are some URLs for you to check: this is the URL of the GitHub repository for my HTTP component, and then there is the page on extending MySQL, which is basically the Doxygen start page for the component infrastructure. And as usual I put in links to forums.mysql.com and bugs.mysql.com — I like those places, they are geared towards you, the MySQL community, so please be active there. If you have any questions, don't hesitate to contact me directly — this is my email here, so write away. And as usual, thank you for using MySQL. It's fun writing it, and I hope it is just as much fun for you using it and extending it. So thank you again for your interest, and I hope to see you at some other talks. Now, if you have questions, I will be around — please ask away, and thank you again. Thank you.
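To make the HTTP glue described in this walkthrough a bit more concrete, here is a heavily simplified, self-contained libmicrohttpd sketch of the daemon start/stop and callback shape. It deliberately leaves out the MySQL component declaration and the table access calls (whose exact service signatures live in the table_access_service.h header mentioned earlier), and the fixed reply string stands in for the data that the real component fetches from the table:

```cpp
// Sketch only, not the talk's actual code; build with: g++ demo.cc -lmicrohttpd
#include <cstring>
#include <microhttpd.h>

static struct MHD_Daemon *g_daemon = nullptr;  // kept global so deinit can stop it

static MHD_Result ahc_echo(void *, struct MHD_Connection *conn, const char *url,
                           const char *, const char *, const char *, size_t *,
                           void **) {
  if (strcmp(url, "/") != 0) return MHD_NO;  // only serve the root URL
  const char *id = MHD_lookup_connection_value(conn, MHD_GET_ARGUMENT_KIND, "id");
  // The real component calls its fill-table-data routine here, looking up `id`
  // through the table access service; we just echo a placeholder instead.
  const char *body = id ? "data for requested id" : "";
  struct MHD_Response *resp = MHD_create_response_from_buffer(
      strlen(body), const_cast<char *>(body), MHD_RESPMEM_MUST_COPY);
  MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
  MHD_destroy_response(resp);
  return ret;
}

// What the component init/deinit functions boil down to on the HTTP side:
bool http_init() {
  g_daemon = MHD_start_daemon(MHD_USE_THREAD_PER_CONNECTION, 18080, nullptr,
                              nullptr, &ahc_echo, nullptr, MHD_OPTION_END);
  return g_daemon != nullptr;
}
void http_deinit() {
  if (g_daemon) MHD_stop_daemon(g_daemon);
  g_daemon = nullptr;
}
```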
We will explore the latest component services offered by the server, and then we will check how one can use the table access service to build an HTTP server component that allows access to table data via HTTP without a middleman. The MySQL development team is constantly adding new and very useful component services. These can be leveraged by creative component authors to produce useful MySQL server add-ons. Check the latest additions to the service list and see some of them in action through an HTTP server component that serves table data.
10.5446/56946 (DOI)
Hello, my name is Valeriy Kravchuk, I'm a principal support engineer working for MariaDB Corporation, and today I'm going to speak about flame graphs. Originally the idea of flame graphs was born in the context of dealing with profiles created for complex software. Sampling is basically measuring the frequency or duration of, or anything related to, function calls, and the problem with profiling software as complex as MySQL server is the amount of information that is collected, even by sampling profilers like perf or any other profiler. These huge data sets have to be preprocessed or analyzed somehow. If the profiler collects the information you are interested in as text, you can use text processing tools, but that can be a complex task, as shown by the Percona tool called pt-pmp, the "poor man's profiler", which tries to summarize backtraces sampled with gdb. It is more than 100 lines of code just to collapse the information and present a summary that is more readable for human beings. The problem is that human beings are not great at understanding a lot of text, especially unstructured text. So we need structure, or even better we need pictures: we have to visualize the information, and then we can easily spot the hot points, the problems — and this is where the idea of flame graphs originates. This is a tool set created originally by Brendan Gregg for Unix systems, based on the output of profilers and tracing tools from Unix systems. But the problem we have to deal with is similar in all environments: no matter how the profiling information is collected, we are better off representing it graphically, and flame graphs are a great way to do that. As a side note, whenever you see underlined text in my slides (which are already shared), it is a link you can follow to get a lot more details. So, to show the problem live, let's apply a sampling profiler, perf on Linux, to a MySQL server running under sysbench load. If we collect samples 99 times per second for 30 seconds, we end up with more than a megabyte of binary data. We can represent this data as text, and if we do it with a usual perf script command, we will see output like this: for each sample, we just check what the instruction pointer was pointing to on every core. We see that a specific process — in our case mysqld with this specific ID — had something running on the CPU core with this ID, that many seconds from the beginning of sampling, and the stack trace that we were able to follow looks like that: addresses, function names, function arguments, things like that. It's a huge amount of text. Perf itself has a way to summarize this text as a tree representing the share of samples — we had collected with the -g option, so we were collecting stack traces — showing how many times, out of the total number of samples, a specific call at a specific level was seen. We see that in this specific case we were mostly running prepared statements, and there we were mostly running selects: in 19% of all samples a select was running. I have deliberately used a very small font size here to fit at least some small share of the information on screen. The thing is, even this tree — a more visual kind of information — for complex software like MySQL takes almost as much space as the original binary data collected over 30 seconds. So it is really hard to deal with and really hard to understand. Here come flame graphs.
So if we are able to represent each and every function call as a box, and the size of the box represents the share of this box in the total samples collected, then it is clearly visible where most of the time is spent. So this is a flame graph. It is called "flame" because by default it uses warm colors, but the meaning of the color is not something you should concentrate on. Here you see call stacks vertically, and you see function calls summarized horizontally. In this example the samples were collected by the bpftrace utility from a Percona Server, but it doesn't matter. You can clearly see that we can fit a lot of information in such a graphical representation. And this flame graph, by design, is an SVG file. It is interactive: you can search, you can zoom in and out, and it is much easier than scrolling over huge files of text. The implementation of flame graphs and the idea come from Brendan Gregg, the famous performance expert who worked and still works for Netflix. There is a set of tools called FlameGraph — it's a GitHub project. The basic idea of how to interpret them is presented here: on the x-axis we show the stack profile population, sorted alphabetically. So it is not sorted by time; the horizontal order means nothing, but the width of each box means a lot: it is the number of samples out of the total number of samples reviewed, or the share of this specific box in the total for the metric collected. On the y-axis we show call stacks, and we can equally well show any hierarchical structure. The deeper the hierarchy, the higher the flame graph; you will see flat flame graphs, nice spikes on flame graphs and things like that. If they are created based on these tools and on profilers, flame graphs can be classified depending on what we measure. If we measure time spent running on CPU, we get CPU flame graphs. If we measure time spent waiting on something, we get off-CPU flame graphs. If we measure another resource — for example memory allocated per specific function call, memory allocated at this place in this stack trace — we get memory flame graphs. You can actually build flame graphs based on many kinds of output; we can build them from pt-pmp, for example. But basically the original idea is that you have the FlameGraph software, you have some profiler like perf on Linux, or maybe more advanced BCC tools on Linux, or tools on other operating systems; you collect the information in the format expected by these tools and then you get that nice visual representation. The flamegraph.pl tool itself is quite simple. It is a Perl program; it has a --help option that produces a lot of output on how to use it. From all the options I highlighted those I use most often: --title, to give a title to your flame graph; --countname — by default the count is samples, the number of samples in stack traces, but if you are measuring time you need to say so, and if you are measuring memory allocated in bytes you have to say so; and --colors — different types of flame graphs may be visually differentiated by the color palette you use: there is the hot or CPU palette, the cold, blueish, waiting or I/O palette, the memory palette that is actually green, and several others. There are many other options as well, but I'm not going to go that way.
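To make those options concrete, a typical invocation looks roughly like this (the input file name and the title are just examples):

```bash
./flamegraph.pl --title "Time waiting on mutexes" \
    --countname=ns --colors=io folded.txt > waits.svg
```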
What I'm going to explain is the expected input of the flamegraph.pl tool. The expected input is like this: you have a so-called collapsed stack trace — a sequence of names in the hierarchy you are representing, separated by semicolons — and then, after a space, you have your measure: what you measured, what you collected over this path. It can be the number of samples, it can be the number of bytes allocated, it can be a time span; it should be an integer number. This format can be abused by different tools, and the idea of how much we can abuse it originates from a blog post by Tanel Poder, where he used flame graphs to represent Oracle SQL execution plans with the time spent on each step. So you can show almost everything with a flame graph. Besides flamegraph.pl itself, as the tools are originally designed to deal with profiling, there are several tools to produce this kind of formatted output from the raw information you get from profilers. There is a stack collapse script for perf, there is one for bpftrace, there is one for gdb backtraces. For this specific talk I also highlighted difffolded.pl, which is basically a way to compare two profiles and show the difference visually — we will speak about it a bit later. To find out how these work and what they expect, and whether there are any options, you should read the source code. So let me show the different types of flame graphs. I will start with the classical one, the CPU flame graph, with warmish colors, where the color does not actually mean anything — it's not like a deeply red box necessarily takes a lot of time; the colors are chosen so that neighbouring boxes of different colors can be visually distinguished, but it's a warm, hot palette. The classical way to create a CPU flame graph for MySQL would be to put some load on it, like sysbench OLTP read-write, and then record samples for the entire system, or just for the mysqld process, with stack traces (-g), 99 times per second for some specific period of time. Then you apply perf script, and the perf script output is the expected input for stackcollapse-perf.pl, which gives you folded stacks. You can further filter these with text processing tools if you prefer to concentrate on a specific part, and then you just apply flamegraph.pl, set some options, and here is the file that you get. Even though Brendan Gregg created a lot of useful stack collapse tools, you may still have to create your own. In a blog post last year I tried to solve a specific problem: find where most of the time is spent waiting for mutexes, in that case for MariaDB. For that task I used another great tool called bpftrace, which I talk about a lot elsewhere. I wanted to clean up the output of bpftrace: I did not want to see any hexadecimal addresses, and I wanted to skip the arguments. So the raw trace that I got from bpftrace — it was not a sampling profile, it was a tracing profile in that specific case — I preprocessed with a quite complex awk script; the details do not matter, just know that you may have to do that. My preprocessing was also a bit "wrong": I put the metric, time spent waiting, first and the collapsed stack trace second, so I had to swap the order of the columns. I fed flamegraph.pl with this input, I used --title, I used --countname, and here is the result, applied to MariaDB in that specific case.
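Putting the pieces together, the classical CPU flame graph pipeline just described looks roughly like this (sampling frequency, duration and script paths are illustrative):

```bash
perf record -F 99 -g -p $(pidof mysqld) -- sleep 30
perf script > out.perf
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl --title "mysqld on-CPU" out.folded > mysqld-cpu.svg

# Each line of out.folded is "frame1;frame2;...;frameN <value>", for example:
#   mysqld;do_command;dispatch_command;mysqld_stmt_execute 1234
```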
Let me repeat: most of my day-to-day work is related to MariaDB, not to MySQL. That specific performance problem was related to inserts becoming slow, and the flame graph clearly highlighted where the time was spent: most of it went into one specific function doing the insert. The problem was later fixed, but to make it clear, the flame graph was really useful. Another type of flame graph, with another color palette, is the so-called off-CPU flame graph. It is not so easy to measure off-CPU time, because sampling profilers do not work really well for that — perf is of limited use here — but there are other tools in BCC, the BPF compiler collection. I used offcputime, which takes into account all possible states of a thread when it is not actively executing on CPU: it may be waiting for I/O, it may be waiting for a mutex, whatever. All of this is taken into account, and offcputime already produces collapsed stacks (some of them were lost, but we were collecting for 60 seconds). So for some specific load on mysqld, the server spent something like 53% of its time just waiting on mutexes, because the load was highly concurrent for my specific host with just two cores, as far as I remember. That's it; it's easy. These are Linux system calls, so this shows what the kernel does — it is not a user-level analysis of what our mysqld was waiting on at that moment. Another idea that many developers would like, and many DBAs would already appreciate, is the fact that you can build a flame graph based on pt-pmp output. This was first published by Percona in a blog post, and it is a natural way to summarize it. While pt-pmp is useful on its own to show the top stack traces — including off-CPU ones, because it just takes a backtrace of every thread, no matter whether it is on CPU or not, so we will see system calls here as well — summarized in a flame graph it is easier to understand. Here I highlighted I/O, for example, and noted that it accounts for a noticeable share of everything, around 21%. The problem with pt-pmp output for flame graphs is the different order of the metric and the stack on each line; this is easy to solve with simple awk. Another thing I had to add is the --reverse option: I wanted the mysqld threads to start at the bottom, so I had to reinterpret the stacks differently. What you can also see is that this profile was for MySQL 8.0.27, where a lot of C++ is used, so even pt-pmp and the advanced collapsing techniques used there may produce empty frames for some parts of the stack trace — this is the problem of C++-intensive applications and standard libraries. Yet another example is the memory flame graph. It is greenish: if you use --colors=mem, it will be shades of green. The idea of that blog post and the investigation I made was how to efficiently measure memory allocations per function call — not outstanding allocations, just where memory is allocated (freeing is another topic). There are different memory allocation routines — malloc, calloc, realloc — and all of them are taken into account by the old and a bit hackish mallocstacks tool, again from the BPF tools by Brendan Gregg. I used it because it was the most efficient and easiest way to apply, and it already produces collapsed stacks, so I just applied flamegraph.pl to what was collected and I can see where memory allocations come from — do_command and, specifically, where inside it. Filesort, for example, took 14%, as far as I can see.
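For reference, the off-CPU and memory collection steps mentioned above can be sketched as follows; the exact tool names and flags may differ per distribution (for example offcputime-bpfcc on Debian and Ubuntu), so treat this as an outline rather than a recipe:

```bash
# Off-CPU flame graph: offcputime (BCC) already emits folded stacks.
offcputime -df -p $(pidof mysqld) 60 > offcpu.folded
./flamegraph.pl --colors=io --countname=us --title "mysqld off-CPU" \
    offcpu.folded > mysqld-offcpu.svg

# Memory flame graph from already-folded allocation stacks (green palette).
./flamegraph.pl --colors=mem --countname=bytes alloc.folded > mysqld-mem.svg
```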
Getting back to the memory flame graph: that is what was hot at the moment I made this screenshot. Now, you have probably noticed the word "abuse". The previous graphs were more or less classical; the idea of this talk was to add something new on top of what was already presented, so I tried to abuse the tools for building flame graphs to represent other hierarchical structures. One of them is waits from the Performance Schema. If you remember, each wait has this kind of structure: it is a wait, on sync primitives, on a mutex, in InnoDB, and then the specific latch we are waiting on — and we spent that many picoseconds on it. If you ever check this output, it is clear that this hierarchy can be summarized somehow, and that is what I did. It is not hard to do: as soon as you collect the waits, you can preprocess them with awk — there is a small trick with changing the separator. Comparing them is also a kind of problem, and the solution is discussed in the blog post. It is pure SQL-based: the query produces the waits, and then these waits are preprocessed a bit. Here we can see that the idle wait was 12%, and some other waits, highlighted here, were more or less the same; we can see the impact of each and every wait, I/O and whatever. The flame graph is the so-called icicle style, so it is inverted — I applied that option because I wanted the waits to go down, so we can see the larger, bigger categories on top; it is a bit easier. This kind of flame graph is not very high, because the number of levels in this hierarchy is quite small — six, maybe seven at most; that is what the Performance Schema gives us. But it can still be represented as a flame graph, and it is a useful representation. You could get the same information with several SELECTs like that, but here you get it all in one SVG file. I also used flame graphs based on Performance Schema waits to show another visualization technique: a difference, produced by the difffolded.pl script by Brendan Gregg. It allows you to compare two collapsed stacks — two flame graphs, basically. The shape is the same as in the second sample, but the coloring here finally matters: it is a so-called red/blue flame graph. For each box, if the second sample has a much bigger value of the related metric, the box will be deeply red; if it has much less, it will be deeply blue; and if the metric for this specific box is basically the same, it is white. So you should pay attention to red, and you will see where most of the additional time is spent. Here it was spent on InnoDB file I/O, when we switched innodb_flush_log_at_trx_commit from zero to one. That's it. The original "abuse" idea by Tanel Poder was so interesting to me that I also tried to create flame graphs based on query execution plans for MySQL. This became possible since MySQL 8.0.18, where EXPLAIN ANALYZE was added, with the TREE format as the default and, for now, the only format. So we have actual metrics collected during query execution: the real number of rows, the real number of loops — how many times each step was executed — and the real time spent to return the first row and all rows. I highlighted the items I am interested in. This representation is a tree because of the offset from the beginning of the line: every four spaces at the beginning represent the next level in the tree. So it took me quite a lot of effort to preprocess this with awk and build proper collapsed tree structures.
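As a rough sketch of how the Performance Schema waits end up in folded format, and of the differential comparison, something like the following is enough (this assumes the wait instruments and consumers are enabled, and simplifies the filtering compared to the blog post):

```bash
mysql -N -e "SELECT REPLACE(EVENT_NAME, '/', ';'), SUM_TIMER_WAIT
  FROM performance_schema.events_waits_summary_global_by_event_name
  WHERE SUM_TIMER_WAIT > 0" | awk '{print $1, $2}' > waits.folded
./flamegraph.pl --inverted --countname=ps --title "P_S waits" \
    waits.folded > waits.svg

# Differential (red/blue) graph from two collapsed samples:
./difffolded.pl waits_before.folded waits_after.folded | \
    ./flamegraph.pl --title "waits: before vs after" > waits-diff.svg
```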
Back to the execution plan flame graphs: that preprocessed output was then loaded into a database table, and with some common table expressions — recursive CTEs, so quite modern SQL — I was able to get output that actually looks acceptable for a flame graph. The details are all in the post. Here we can see where most of the time was spent while executing the query. Basically it is the same kind of flame graph — also a nice thing, although in hot colors; that was my choice. It would be much easier to do the same if the information about the time spent on each step of the plan were available in a table, like in Oracle. I actually created a feature request for MySQL asking to do the same, and it was verified. It would also be useful to get this in JSON format, because that would simplify processing with existing libraries for dealing with JSON — as it happens in PostgreSQL, where this syntax originates from, or in MariaDB. These details aside, I have good news for you: there is already an open source MySQL query profiler tool that creates such flame graphs without all those awk tricks and common table expressions — at least they are not visible. It is an interactive tool created in Node.js; you just start it, there is nothing to build once all the preconditions are met, and you get this nice view: you can run any query, get a flame graph for it, and hover over it. Just check it — it's cool. That is all from me. There are some links to previous talks, useful resources, and bug reports where flame graphs are used. I should just highlight that flame graphs are a great tool for representing any hierarchical structure where a specific metric is associated with each level of the hierarchy. Thank you for your attention. Now I'm ready for your questions. Bye. Bye.
A flame graph is a way to visualize profiling data that allows the most frequent code paths to be identified quickly and accurately. They can be generated using Brendan Gregg's open source programs on github.com/brendangregg/FlameGraph, which create interactive SVG files to be checked in a browser. The source of the profiling data does not really matter - it can be the perf profiler, bpftrace, Performance Schema, EXPLAIN output or any other source that allows the data to be converted into the expected format of a semicolon-separated "path" plus a metric per line. Different types of Flame Graphs (CPU, Off-CPU, Memory, Differential etc.) are presented. Various tools and approaches to collect profile information on different aspects of MySQL server internals are presented. Several real-life use cases where Flame Graphs helped to understand and solve the problem are discussed.
10.5446/56947 (DOI)
Hi, my name is Eisling Gröven, I'm working on MySQL at Oracle, and today I will talk about hash join in MySQL 8.0. First I will give an overview of what hash join is, before I go into the details of the hash join implementation in MySQL 8.0. Then I will discuss when it is beneficial to use hash join and what you need to do to take advantage of these benefits. So first, an introduction to hash join. Hash join is a method for performing the join of two tables. What I will describe here is often called classic hash join. Here we have an example where we join the two tables orders and customer to find information about orders with a total price above half a million. Hash join has two phases, the build phase and the probe phase. In the build phase we find the rows from the orders table that qualify; we call this the build input. We put these rows into a hash table by applying a hash function to the join key. In the probe phase we read from the customer table and apply the same hash function to its join key to find matches in the hash table. Any matches are added to the result. So one big advantage of hash join is that we only need to read each input once. Note that the hash table must fit in memory, so we normally choose the smallest input as the build input. If the build input does not fit into memory, it is possible to split it and do multiple scans over the probe input, but as we will see, there are more efficient ways. Hybrid hash join is an alternative for handling a build input that is too large for the allocated memory. The build phase starts as described earlier, by building a hash table from the build input. However, when the hash table is full, the rest of the build input is stored in a set of chunk files; a different hash function is used to determine which chunk file a row is added to. In the probe phase, in addition to checking for matches in the hash table, all rows of the probe input are added to another set of chunk files, using the same hash function that was used when populating the build chunk files. After the probe input has been processed, one repeats the process for each pair of chunk files: first a hash table is built from the build input in chunk file number zero and probed with the contents of probe input chunk file number zero; then the same is done for files number one, and so on, until all the files have been processed. MySQL has supported hash join since MySQL 8.0.18. In earlier versions of MySQL, the only way to execute joins was the so-called nested loop join. In a nested loop join we will, for each row of the left input, find all matching rows in the right table before continuing with the same process for the next row of the left input. This can be very efficient when one can use an index to find matches: one will not need to read all the rows of the table, and there are relatively few matches per row. Nested loop join is also not that well suited if I/O is needed to access the rows of the right table, since that will generate a lot of random I/O. In MySQL 8.0 both nested loop join and hash join are supported, and both classic in-memory hash join and hybrid hash join have been implemented. How much memory we can use for the hash table is determined by the join_buffer_size variable, and for hybrid hash join the maximum number of chunk files is 128. This means that a build chunk file may become larger than what the hash table can support.
When that is the case, one will have to load the hash table in multiple steps for each build chunk file, and one will have to scan each probe chunk file multiple times. Note that hash join will usually only be selected automatically when no indexes are available for nested loop join. This means that, in some cases, in order to force hash join to be used, we need to disable indexes, for example by using the NO_INDEX hint. When there are no indexes that can be used for nested loop join, hash join is much faster. Here we show what happens if you do not create any indexes at all for the queries of the TPC-H benchmark: many of the queries will then be more than 1000 times faster with hash join than with nested loop join. However, hash join can also be faster than indexed nested loop join. For this simple query, where we vary the size of the build input, we see that if more than 30% of the rows from the orders table are selected, hash join will be faster than nested loop join. Note also that the performance of hash join in MySQL has been improved since it was first introduced: as we can see from this diagram, a new hash table implementation has improved both the memory usage and the efficiency of hash join. So when should we use hash join? As already mentioned, if indexes are not available, MySQL 8.0 will automatically use hash join instead of nested loop join. In this case we have a query where we want to find those that are registered as both supplier and customer, and we will do that by comparing phone numbers. Since the phone number columns are not indexed, we see from EXPLAIN that hash join is used in MySQL 8.0, and the performance difference is enormous: while this query takes over 2.5 hours in MySQL 5.7, it can be done in 1.5 seconds in MySQL 8.0. Hash join is also very efficient when queries are I/O-bound. The curves I have shown earlier show the performance when the database buffer is big enough to hold all the data; if you have a much smaller database buffer, so that I/O is needed, things are quite different. As you can see, hash join takes only a constant factor longer when storage is involved, but for nested loop join things change drastically. The main reason is that with nested loop join you may have to read the same page multiple times from storage, since the access pattern is random, while with hash join you save a lot by only doing sequential access. So in this case hash join was faster when more than 1.5% of the orders table is accessed. Query 13 from the TPC-H benchmark is a case that shows that if we access a large part of a table, hash join is better. In this case we will access the entire customer table — there are no conditions on columns of customer — and we need to make this the build input because it is the left side of a left outer join, and with a left outer join you cannot switch the order of the two tables involved. We see that with hash join we use 45% less time than with nested loop join. Another benefit of hash join is that filtering may be done before the join also for the probe input. With nested loop join you will access the right-hand table before you evaluate any conditions on rows of this table; with hash join you can do this filtering before you check for any matches. One example to illustrate this is query 5 of TPC-H: in this case we have conditions on two tables, region and orders.
For the join order chosen by the optimizer we can apply the conditions on region before the join, since it is on the left-hand side of the first join. However, since orders is a right-hand operand, the condition on order date will be applied after the join for nested loop join. But if we switch to hash join instead, we can evaluate the conditions earlier, and as we see, if we do that for this query, the execution time is reduced from 14 to 8 seconds. So what can you do to benefit from the improvements enabled by hash join? First, in order to check whether hash join is used, you should check EXPLAIN. Traditional EXPLAIN will show "Using join buffer (hash join)" on the line of the probe input, and the new tree-format EXPLAIN gives you even more detail: here you can see that the hash join operation has two operands — the top one is the probe input, while the bottom one contains the build input. As already mentioned, if hash join is not automatically chosen, one can make it be used by disabling the use of indexes for nested loop join. For query 13, which we looked at earlier, we see that there is one possible index, i_o_custkey, so we can use a NO_INDEX hint to disable this index, and now we see that hash join will be used. For the other example from TPC-H, query 5, we wanted to replace the nested loop join involving the orders table with a hash join so that the filtering on order date could be applied earlier. Since in this case there are multiple tables involved, things become a bit more complex: in order to avoid the optimizer changing the join order of the first tables when we disable indexes on orders, we use the JOIN_PREFIX hint to make sure that the join order for the first tables does not change. From EXPLAIN we can then see the build and probe inputs, and that the filtering is now applied to the orders table before the join condition is evaluated. As you see from this example, forcing hash join can become a bit cumbersome, and we are working on solutions to support a cost-based selection of hash join for these cases. The size of the join buffer will impact the performance of hash join. In this diagram we show the performance of our query for different sizes of the join buffer. If the join buffer is 1 gigabyte, we can do an in-memory hash join for all sizes of the build input. If the join buffer is 256 megabytes, we need to use hybrid hash join when more than 40 percent of the rows are selected. With 16 megabytes, hybrid hash join is used in all cases. If the join buffer is only 4 megabytes, each of the 128 build chunk files will not fit in the join buffer if more than 90 percent of the rows are selected, so in that case there is extra overhead, since each probe chunk file has to be read twice. However, I think the main message from this diagram is that the overhead of hybrid hash join is not that high, so a big join buffer is not necessary to get good performance from hash join. So how can we check whether our join buffer is big enough for in-memory hash join to be used? We can check the Performance Schema tables that hold information about memory allocation. There is a special event for hash join, and if COUNT_ALLOC is equal to HIGH_COUNT_USED for hash join, we know that a single hash table was used, which means that it was able to do the hash join entirely in memory. Otherwise hybrid hash join has been used — in that case memory for multiple hash tables has been allocated.
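A rough way to run that check is the query below; the exact EVENT_NAME for the hash join buffer may differ between versions, so a pattern match is used here:

```sql
SELECT EVENT_NAME, COUNT_ALLOC, HIGH_COUNT_USED
FROM performance_schema.memory_summary_global_by_event_name
WHERE EVENT_NAME LIKE '%hash_join%';
```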
In order to be able to estimate how many chunk files are needed, MySQL will try to estimate the size of the build input. Without histograms this estimate will often be less accurate, especially when we disable indexes, since then the optimizer cannot use information from the indexes to do the estimation. This diagram shows what happens to our query if I remove the histogram on o_custkey: we see that the performance becomes much less predictable without histograms. So, to sum up, there are many cases where hash join will improve performance. If indexes are not available, there can be an enormous benefit, but hash join is often a better choice when queries are I/O-bound, when a large part of a table will be accessed, or when there are selective conditions on multiple of the involved tables. I have shown how you can use the NO_INDEX hint to force hash join to be used, and increasing the join buffer size may often improve the performance of hash join. And finally, remember to create histograms.
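To make the two tips from this summary concrete, here is a hedged sketch; the table, column and index names (customer, orders, o_custkey, i_o_custkey) follow the TPC-H schema used in the talk, and the bucket count is just an example value:

```sql
-- Force hash join for a Q13-style query by disabling the index that
-- nested loop join would otherwise use, and check the plan:
EXPLAIN FORMAT=TREE
SELECT /*+ NO_INDEX(orders i_o_custkey) */ c_custkey, COUNT(o_orderkey)
  FROM customer LEFT JOIN orders ON c_custkey = o_custkey
 GROUP BY c_custkey;

-- Histograms improve the optimizer's estimate of the build input size:
ANALYZE TABLE orders UPDATE HISTOGRAM ON o_custkey WITH 256 BUCKETS;
```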
Hash join was introduced in MySQL 8.0.18 and was presented for the first time at FOSDEM 2020. Since then, the performance of hash join has been improved, and I will present results that show this. We will also discuss what kind of queries benefit from hash join, what you need to do for hash join to be used, and how to tune your system for optimal hash join performance.
10.5446/56954 (DOI)
Good morning, and thank you for attending this presentation on Percona XtraDB Cluster non-blocking operations. My name is Marco Tusa and I am a technical lead at Percona — for a few years now — and a principal architect; I have been working in the MySQL area for many years, and in the database development environment for too long, really — almost 35 or 36 years. I know, it's a long time. I'm also an open source developer and contributor: you can visit my GitHub, where I have developed a few tools that you can of course use, and please provide feedback — I always appreciate that. Now, we are here to talk about NBO and what we want to cover. In MySQL you have operations that are defined as online DDL, and those operations allow writes and other operations to work concurrently. That of course reduces the locking and the impact of changing a table structure or adding an index, things like that. However, in a Galera-based solution like PXC, when you issue an online DDL, that "online" is not actually respected, and the whole cluster will put itself on hold for writes — for data modifications — because the lock that is acquired impacts the whole cluster. So what we are going to do today is cover very briefly what this is, what NBO brings new to the scenario, and then a very fast comparison with Group Replication. In terms of online DDL, we said that standard MySQL has many of them: you can do index operations, primary key operations, generated column operations, general column operations, and many, many other things. But the least impacting, the ones that are actually the most "online", let's say, are operations like index operations; column operations like dropping a column, creating or renaming a column, reordering, this kind of thing; setting a default; foreign key operations, that is creating or dropping a foreign key; or tablespace operations like changing a name or enabling and disabling encryption.
If you execute one of these — and today I will focus only on the index operations, because that is what we are doing with NBO — if you issue any of these against PXC, you will get a total lock: the cluster will stop. In PXC, in order to run DDL, we have three ways. One is Total Order Isolation, abbreviated TOI, which is basically a way to execute the operation on the whole cluster at the same time. Then there is the Rolling Schema Upgrade, RSU, which instead is executed one node at a time. And finally we have pt-online-schema-change, an additional script written by Percona that allows you to modify tables online with a trick. We will see each of these very briefly in the next slides. A PXC cluster is a tightly coupled solution; the concept of primary and secondary does not really apply in PXC. Normally all nodes are primaries, which means that you can write on any node, and that node will communicate the changes to the others and agree with the other nodes on whether the change can be committed or not. The point here is that if you do so in a very busy cluster, the certification phase — the phase that is in charge of guaranteeing data consistency — can become overwhelming. It can be very heavy to certify thousands of queries, thousands of changes, per second, so the cluster will start to suffer and its performance will start to decrease. And in any case you are not scaling writes by writing on the other nodes, because the write capacity is the one you have on a single node: the other nodes need to recognize the execution and approve it anyhow. So keep in mind that I am referring here to "primary" as an abstraction, a logical abstraction: there is no primary setting in PXC that dedicates only one node to writes. When I use TOI and I start to execute a DDL, I execute it on one node, it is automatically distributed to all nodes, and all writes are put on hold on the whole cluster. The good thing here is that the cluster will always be consistent: whatever writes you do, whatever changes you do, the cluster will be perfectly consistent, and it doesn't matter if one node goes up or down, because the others all reflect the same status — they are perfectly consistent with each other. With RSU, if I have a primary and I want to start with it, it is advisable to move the writes to another node — or you can start from a secondary and do the secondaries first. But let's say that I have my primary and I want to start from that specific node: the advice is to move the writes to another node, so that you can declare it a secondary, and execute your DDL there. As soon as you execute the DDL there will be a metadata lock and that node will be locked, but the point is that the action is isolated on that node only, and the cluster, at that point, is not consistent. You also have to replicate the action three times: if you have an ALTER that takes 10 hours, you have to do it first on one node, then on another, then on another, so instead of 10 hours you have an ALTER that takes 30 hours to be completed on the whole cluster. Of course there is less impact, but there is the risk of having the cluster misaligned. If you use pt-online-schema-change instead — which, as I said, is an external script written
by Percona — what pt-online-schema-change does is, first of all, read the table and then create an empty copy of it. So you have the production table and an empty copy of the production table, and it starts to migrate the data in chunks from one table to the other. There are triggers to keep the new table updated, and several other things, but to keep it simple: data is copied over (not moved) from the original table to the new table. When everything is done, there is a moment where there will be a lock — a very short one — and the old production table is renamed away while the new table becomes the production table. The good thing is that locking is limited, but the bad things are that you have data duplication, you need additional space, and it takes time to copy everything over; and in any case, while you are copying the data, the server is also busy doing that copy. So there are good and bad factors all the time here. Now, what about NBO? First of all, NBO is still a technical preview; it is a feature that we have developed in Percona just recently, and it covers only the index part — ALTER TABLE to add or drop an index, CREATE INDEX and DROP INDEX — just because that is the most common operation and also the one where we are trying to reduce the impact. If you try to run a DDL with NBO that is not one of those, you will get an error — let's say "not supported yet". Yet. Keep in mind that I will now show you a simple test that you can also replicate in your environment, a simple test to show what NBO does and why it behaves in a specific way. What we will do is take a table created with sysbench and then perform a DDL adding an index on this table, while on a different connection we try to insert data into the table that is being altered, and another connection inserts data into a different table. I also put in the commands for you, if you want to use them, to check what is going on on the other nodes of the cluster. As I said, this is a simple table coming from sysbench — actually not exactly the sysbench one, but the one you can find in my GitHub; it is a little bit more articulated, but nothing special — and I put in just five million rows, just to give it a bit of volume, otherwise the ALTER would take milliseconds. So the test is: one connection runs the ALTER TABLE, and the other commands are for inserting data into the additional table. Before doing the inserts, what you have to do is define which method you want to use: do you want to use TOI, NBO or RSU? In this test we collected data using just TOI and NBO. To do so, you just set a session variable on the node where you are going to execute the ALTER — SET SESSION wsrep_OSU_method to TOI or NBO — and then you issue the ALTER TABLE.
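In practice the switch looks roughly like this (the table and column names are placeholders for the sysbench-style table used in the demo, and NBO is a tech preview, so keep this out of production):

```sql
-- Connection 1 (the node where the DDL runs):
SET SESSION wsrep_OSU_method = 'NBO';        -- or 'TOI' for the baseline run
ALTER TABLE test.sbtest1 ADD INDEX idx_c (c);

-- Connection 2: inserts into test.sbtest1 wait on the metadata lock.
-- Connection 3: inserts into a different table keep running under NBO,
--               but stall for the whole ALTER under TOI.
```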
What happens is quite interesting. With TOI, which is on the left, and NBO on the right, you see that with TOI the ALTER takes more or less 64 seconds, and the table being altered and any other table are stopped for the whole time. With NBO we currently get a bit more execution time, but the other table is able to insert, and we have just 25 seconds of lock instead of 64 — one third — and actually this is something that will become even smaller: eventually the lock will be just a few seconds. If you compare, you can clearly see the difference between TOI and NBO. But why do we still have a lock with NBO? First of all, let's clarify: TOI takes a metadata lock at the beginning, holds it for the whole time, and releases it only when the operation is completed, which means that during that time no operation is allowed. NBO takes a metadata lock at the beginning, for a very brief time, and then another metadata lock is taken at the end. During the operation each node is actually able to work independently, and other operations can be executed on the other tables; of course you cannot insert into the table you are modifying, but all the other tables in the cluster are free to operate. The metadata lock at the end is there to let the nodes synchronize the moment of the commit — the final moment where they say "okay, everything is good, everything is done, let's commit and finalize the operation". This is done with a commit, and the cluster error-voting protocol is also used to make sure all nodes agree. This phase is the one that costs a bit more, because the performance of each node may vary a little: the better a node performs, the faster it finishes the operation, and on the slowest one you will suffer the lock on the other tables — that is where those 25 seconds come from, and they will shrink. The other very important thing is that with NBO you can continue to insert into other tables, and you can even alter another table using NBO as well; this is not possible on that node with standard TOI or RSU. And this is very important: if the node that is executing the ALTER crashes, the other two nodes will independently go ahead, converge at the end, and agree on whether the operation was executed successfully or not. So even if one node crashes, the other two will say "fine, everything is working, I am happy, let's do it" and apply it, and when the crashed node restarts it will resynchronize with the other two. So say you have run an ALTER for 10 hours, or one day, and the node crashes in the last hour: the other two will complete, and your operation is safe — which is something very cool. Now the last part, just a brief comparison between apples and bananas: how does this compare to Group Replication? Group Replication has different consistency methods, and what I describe here assumes the default one. Group Replication does respect online DDL when you execute it, which means we should expect the DDL to be almost online on the node, without impact on the other tables. But how is the DDL actually applied in a Group Replication cluster? What happens is that you have a primary — in Group Replication you actually declare a primary — and then you run your preferred operation, say adding an index, on the
primary node. When the node has completed the execution, the operation is transmitted to the secondaries and flushed to the binlog for eventual asynchronous replication; when all this is completed, you have the full cluster aligned again. So you have one node only executing it, then the operation is disseminated to the other nodes and to the binlog, and only after they complete do you have the cluster realigned. So if you have an ALTER that takes 10 hours, you first have 10 hours on the primary, then another 10 hours on the secondaries, and only after 20 hours may you have the cluster aligned again. But during this time the impact on the cluster itself is minimal: there is almost no locking, and the performance decrease is minimal — of course it depends on the traffic you have — but, as I said, there is no real locking. However — and this is the part I mentioned that is very important to keep in mind — the point is that PXC keeps the cluster fully aligned: with NBO there is less impact, at a little bit of cost, but the cluster is always aligned, and each node is perfectly aligned with the others. With Group Replication the impact is zero or close to zero, but you have a consistency issue, a bit like with RSU. So, as always, it depends: it depends on what you want and what you expect. For me, for a tightly coupled cluster, having a method that works in phases like that is not something I really like, but hey, it could be okay. Final conclusion: recently we had a discussion in Percona, because we call it NBO, non-blocking operation, but actually there is some blocking too, even if it is minimal, so the question was raised why we don't call it "less blocking operation". We are still working on the feature, so maybe we will call it less blocking operation, which is probably more appropriate. NBO is helpful for standard operations like creating or changing an index, and it is very helpful because, instead of performing the operation with pt-online-schema-change or doing RSU and desynchronizing the nodes, you can do it online with minimal impact. As I said, the impact is related to the node capacity and not only to the amount of data, so it really depends on how the nodes are performing: some extra cost exists, but it is really solving a significant issue, and it is consistent all the time. This is a technology preview feature: please don't trust it in production — don't put it in production — just test it and let us know, because more will come. If we see that this is a good thing to do and is working as it should, then with your help and your feedback we will try to do our best and implement additional features for the other online operations that, as I indicated in the first slides, are the less impacting, more "online" ones. And one final comment: Percona is focused on developing this solution for the community. Another distribution has NBO, but you need to buy the enterprise version — it's not free, it's not available for everyone. We believe in open source; we believe that what we do is for the benefit of the community, and we always operate in this way. We need your help, of course, but at the same time we want you to
understand that what we are doing is meant to make our life and your life better when using MySQL. OK, here are a few references for you if you want them. Thank you very much for attending this presentation; please raise any questions you have via the Slack channel, the chat, or whatever works for you, and thank you very much.
Performing simple DDL operations such as ADD/DROP INDEX in a tightly coupled cluster such as PXC can become a nightmare. The metadata lock will prevent data modifications for a long period of time, and to bypass this we need to become creative, for example by using Rolling Schema Upgrade or pt-online-schema-change. With NBO, we will be able to avoid such craziness, at least for a simple operation like adding an index. In this brief talk I will illustrate what you should do to see the negative effect of NOT using NBO, as well as what you should do to use it correctly and what to expect out of it.
10.5446/56956 (DOI)
Hi everyone, my name is Sunkur Anganath and I'm here with my colleague, Mithika Ganguly. Welcome to this talk on challenges and opportunities in performance benchmarking of service mesh for the edge. Our team has put in a lot of work to prepare the details we are going to share today. Let's look at the agenda. We will start by looking at the requirements of edge deployments versus the attributes a service mesh provides, then at challenges at each layer of the networking stack; I will share some of the experiments we have done, along with the benchmarking results and a deep-dive micro-architecture analysis for these experiments, and we will end with a summary and a call to action. Whenever we say edge deployments — edge platforms, edge native applications — there is a lot of confusion today about what "edge" or "edge native application" means. The simplest way to understand the types of edges is to look at them from the end devices' perspective and at the round-trip latency needed to reach those devices. If you go by location, there is the on-premises edge, which serves the different IoT protocols — transportation, healthcare, and so on. Then you have the network edge: for example the access edge, which is your wireless radio access network, or the near edge — universal customer premises equipment, regional data centers, and so on — where the latency is between 10 and 40 milliseconds. Looking at edge platforms, we are putting in a lot of effort to provide a simple, unified platform across these deployments. And when you look at the application stack required to service these edges, we see some of the challenges that exist in the market today, for example with respect to mobility and federation across MEC domains, resource awareness, the ability for applications to run with optimal performance for low latency, and scalability of edge native applications — and we try to map this onto microservices concepts. Service mesh is an important factor in the industry today, so let's try to understand what a service mesh essentially is. Many applications or services run in a data center environment, and a sidecar proxy is attached to, and servicing, each of these applications, offloading a lot of data plane functions. When you scale this across your deployment, each sidecar proxy is attached to an application, and these sidecar proxies are in turn controlled and managed by a control plane with a set of APIs through which you can configure the proxies at runtime. This essentially forms your service mesh, with a control plane and a data plane. Now let's look at how a service mesh can map to some of the edge native application requirements — whether the two help each other and go hand in hand. Let's look at some of the attributes through which we can correlate both. The first one is awareness and discovery. For an edge native application — for example, a VR application running on a mobile device travelling across different locations in a car — the backend infrastructure needs to be able to discover the registered device, service the device across different locations, and maintain the quality of service along the path.
On the backend infrastructure side, if we look at the service mesh, the sidecar proxy can discover these services across clusters and tune the application's network traffic based on QoS, location, traffic requirements, and so on — a lot of this functionality can be handled by an intelligent sidecar proxy. The next attribute is resiliency. A lot of edge deployments sit outside a data center, at the roadside or on customer premises, so the application and infrastructure need to be able to self-heal and be resilient across restarts, temperature conditions, road conditions, failure situations, and so on. The service mesh provides some of these aspects built in, with health checks, traffic bleed-off and circuit-breaking functionality; a lot of this is taken care of by the control plane at runtime, and you can set your KPIs for it. Next is scalability: the ability to scale and address load conditions on demand, based on the traffic at the edge. A service mesh can throttle traffic surges or reroute the traffic to application pods that can handle the increase in requests. Next is low-latency offloads. For edge native applications to satisfy an SLA, you need to be able to leverage the resources of the hardware — whether offloading to a GPU or a SmartNIC, and so on — to meet their QoS requirements. There is a lot of ongoing work for service meshes to be able to leverage these hardware offloads, for example to utilize a SmartNIC or hardware acceleration for a lower round-trip time. Finally, security and privacy: you need to ensure security across these edge deployments for all the different network functions across the boundaries of the edge-to-cloud continuum. A service mesh provides this security by offloading it to the sidecar proxy, which takes care of things like TLS or IPsec termination. So there is a lot of synergy between the two; at the same time, there is a set of challenges and work that needs to be done in order to make service mesh compatible with edge native applications. With that, I'll hand it over to Mithika. Hi, everyone. If we want to look at how to make use of the service mesh in a performant manner, one of the things we decided to look at is a characterization of the network. For example, as you see in this diagram, on a particular host, when you have applications — microservices — inside containers, a service mesh deployment places a sidecar alongside each container and an ingress proxy inside another container; for a particular number of microservices, your ingress proxy is deployed to support that many connections and threads. In a cloud environment you may be hosting all of that inside a VM. As you can see, each of these layers adds some amount of performance delay. The Kubernetes environment itself has a CNI, which works at layer 3, then you may have a layer 4 load balancer, and then, all the way up to the ingress proxy and the sidecar proxies, you have layer 7. Each of these layers may be using iptables in a very basic configuration; if you decide to use eBPF, you offload some of it to eBPF. But if the number of pods you want to scale to is a thousand or above, you will hit a performance wall, characterized by QPS and latency. So each layer adds overhead, and the tail latencies need to be characterized. A micro-architecture analysis is what we will present.
And some optimizations and offloads will be discussed in the following slides. OK, so specifically, what did we attempt in the benchmarking environment? We had a setup — shown on the left-hand side — which is a bare metal setup. When we say bare metal, we had no VMs, nothing virtualized: we had services running in pods with only the Envoy proxies, the ingress and the sidecar proxies. The basic experiment was done using the Fortio client, as recommended for the Istio benchmarking environment. What is shown on the right-hand side is a more detailed environment in which many different layers are installed, on both the client and the server. On the client you have the CNI, which is Calico, with the service mesh client — the Fortio client — inside a VM. You could run the Fortio client inside a container inside a VM, or you could run it directly on the host: so two different client environments, a bare metal host client and a container-plus-VM client on the host. On the server side we again had the virtualized environment with Calico as the CNI; we also had kube-proxy in one setting. The virtualized environment used OVS-DPDK for the control plane switching. For the application we used an NGINX web server hosted inside a container with the sidecar proxy. We had a third level of test where we used IXIA for some of the packet-level tests. So: host client, client inside a container in a VM, and IXIA as a client — three different client environments talking to the server hosted on one of the worker hosts. The performance measurements were, again, at different levels: queries per second streamed out by the Fortio client; the IXIA work was largely to figure out the Mpps and bandwidth we get; and the layer 7 performance came from the Fortio client, as queries per second and latencies. This was tried out on two different Xeon systems, a 48-core Xeon and a 32-core Xeon; our current environment is looking at a larger core count for a future CPU. Core scaling experiments were done, from a lower number of cores, like 8 to 10, all the way up to 48 cores. Clone scaling experiments were also done — a smaller number of clones, like 20 or 40, up to 100 clones, clones for the ingress proxy and clones as microservices — and the number of connections and the concurrency were also exercised. So multiple different levels of experiments were attempted. Now, in the next slide, what we will show is a separate benchmark that was tried out: even before all of this is deployed, a very basic iptables benchmark was conducted, just to see what bottleneck we hit with the number of flows for a particular packet size. For example, the graph on top is for 1 kB packets and 1000 flows on a 16-core, 32-thread system; that was fixed — we didn't change any of it — but we changed the number of rules. So with 1 k rules all the way up to 100 k rules, what kind of latency do you get? It is a nonlinear increase in latency, while throughput goes down nonlinearly too; the number of rules has an impact. The goal was to see what bottleneck you reach and what kind of latencies, at a nanosecond level, are contributed by just the iptables layer. So besides figuring out the overall latency, we would also like to see what each functionality within the networking layer contributes.
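For readers who want to reproduce the rule-count experiment, the setup can be sketched roughly as below; the chain, addresses and rule counts are illustrative, and the measurement itself (flows of 1 kB packets, latency and throughput) is done with a separate traffic generator:

```bash
N=100000   # number of iptables rules to install
for i in $(seq 1 $((N - 1))); do
  # Filler rules that the benchmark flows will not match.
  iptables -A FORWARD -s 203.0.113.1 -p tcp --dport $((1024 + i % 60000)) -j DROP
done
# The rule the benchmark traffic actually matches, placed last.
iptables -A FORWARD -p tcp --dport 80 -j ACCEPT
```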
The second set of tests on iptables was to look at what happens if the position of the matching rule is the first position, the thousandth position, or the 25,000th position. So we have one graph to show the core scaling performance when you have 25K rules and you're matching at the 25,000th position. Both of these together tell us what kind of bottlenecks would be faced with an iptables-based deployment of microservices. Workarounds could be eBPF, Hyperscan, or offloading iptables, and that's something we are looking at in future platforms. So as we move on to the performance summary for one of the scenarios, which is 100 microservices with 64 connections, we have three different sets of results. The first one is bare metal. The bare metal performance between the two different Xeon platforms shows a difference of about 30% in QPS and latency. Xeon one had a lower number of cores, of course, and so its performance is 30% less. Then core-to-core performance: if we want to compare exactly the same thing, that's the last one, the right-hand side performance, which is just 10 cores of the setup. You have one set of results with Kubernetes and Calico and no service mesh, no proxy, and another set of results with Kubernetes, Calico and Istio with Envoy on 10 cores. Again, between these two scenarios, you have the client inside a VM or a bare metal host client. And as you can see, the layers of networking do not improve the result at all. The client in a VM will give you a QPS of 17K, whereas the host gives you 23K. Similarly with latencies, you will see large differences, again about 30 to 40%. The middle one is bare metal with pinned cores. We tried that experiment and we did not scale it; it was just a small one. Since the latency and QPS numbers were very large, in the hundreds of milliseconds, we did not go ahead and formulate a particular policy around that. But as you can see, these three different sets of experiments show you different kinds of results based on the number of cores available, and, for a particular set of cores, the different latencies that get added because of the layers of networking. We will delve a little bit more into this, and I'll ask my colleague to build this out so you can see all the results. Go ahead, Sunku. Yeah, thanks, Mrittika. So let's go one level deeper. We need to look at environments where service meshes are deployed today, for example telco environments, where the Kubernetes containers and service meshes are deployed within virtual machines. So when we take such an example, here we have our application under test, an NGINX web server, along with the Envoy sidecar proxy. And the client that's reaching this web server is Fortio, deployed across two different instances. One instance is within a virtual machine, so we have VM-to-VM communication. The other instance is Fortio deployed on the host, outside of the VM, reaching out to the web server with the Envoy proxy within a virtual machine. So when you look at the performance in such a scenario, do note that here the data plane is isolated with OVS-DPDK, so you have a data plane on the hypervisor which is leveraging DPDK. Comparing the performance with and without the service mesh at layer four, you see a good 62% drop in VM-to-pod type communication and then a 32% drop for host-to-pod communication.
Then looking at layer seven performance from the VM to the NGINX pod, you see a good 46% throughput drop at 10,000 transactions per second, and up to three times higher latency. Looking beyond 10,000, we're losing transactions, so we can't count on those numbers much, but at 10,000 we're losing a good amount, with a 46% drop and three times the latency. And host-to-pod is similar: at 10,000 TPS we see a good 11% drop, versus an increase in latency by four times, 400%, which is a lot. So this gives you an idea of the impact of a service mesh when you deploy it in an environment like this. Next slide please, I'll hand it off to you. Sure. Another, deeper level of analysis would be at the microarchitecture level. Within this, the very basic level of analysis that you can do is called TMAM, or top-down microarchitecture analysis, at the CPU level, and this is largely for Xeon cores. Once you do the TMAM analysis, you can go deeper into each category. So what does TMAM analysis show you? It can tell you whether your workload setup is front-end bound, memory bound, back-end bound, core bound, or exercising multiple other parts of the CPU microarchitecture as you change your workload over a certain amount of time. So if you run this experiment for, say, five or ten minutes, you can have a TMAM analysis that tells you what happens as you increase, say, the number of microservices or the number of connections to the server. For this particular one, we had 40 microservices, and what we saw is that as you increase the number of microservices, the front-end bound activity decreases and the core and memory bound activity increases. Again, you can do this analysis both on the client side and on the server side. So if you are doing a lot of computation inside the microservice, you would see more core bound activity. If your microservices plus the service mesh require a lot of memory and the amount of memory allocated is not enough, TMAM will slowly show it moving towards memory bound. If there isn't a lot of repeated activity, so the data from memory is not captured in the L3 cache, your L3 bound portion will decrease for a certain number of microservices and it will become front-end bound. And hence this is the core of what we would like to go deeper into: look at the specific cores where the ingress proxy is hosted versus the application, and at the cores exercised by the sidecar proxy, and see what happens. Do cache misses increase, and should I give more cache to such cores? If it is more memory bound, are more memory accesses required, or a distributed allocation? If remote memory accesses happen a lot because of the NUMA setting, should the memory attached to that particular socket be assigned to the microservices on that socket? This kind of analysis requires a deeper dive, and we would probably bring that in a future session. A very basic CPU cycle analysis, which you can do using perf top or VTune, is what we attempted with Envoy; here the kernel and Envoy are set up without the CNI layer. So this set of results is based on just an Envoy bare metal setup, just to look at how many of the CPU cycles are consumed by different layers of software. So here you see three different sets of results. The first one is one core, multiple clones, with the front proxy and a sidecar.
The second is one core, a sidecar, and one clone or multiple clones. And the third is multiple cores, multiple clones, a front proxy and a backend sidecar. So why these three? Just to look at where the cycles are getting used up overall. And it was good to confirm what we already knew, that the Linux kernel is being exercised for the networking stack and for a number of other activities. Among those, Linux forwarding was the topmost, then Linux switching: switching through the kernel multiple times by each layer, whether it is from the OVS layer to Calico to the others. Then, in a bare metal host environment where you have just Envoy at the user level versus the networking stack, you see switching happening, especially when you have an overload of connections, because for the same number of cores you are hosting many more connections and many more microservices; hence you will see quite a bit of switching. System calls and scheduling onto the cores happen a lot, largely at the proxy level, and we saw that as one of the higher cycle costs. Envoy match and Envoy memory copy are areas which only showed up in the second and the third environment. And then the buffer plus watermark mechanism used within Envoy was an area of bottleneck that we saw. Some of our analysis would need to be redone with the CNI, with the Calico environment, but specifically for finding bottlenecks this analysis was useful for us to look at acceleration opportunities. So as a summary, the message we want to give to the community is that the service mesh forms a very important software architectural framework for edge computing and cloud, and it correlates directly with the ETSI MEC framework. Now, the performance impact that the service mesh brings requires complex, multi-level analysis, not just of the user mode and kernel stack, but even within the kernel stack. And if you want to increase utilization with a service mesh deployment, we need to complete this analysis pretty fast, otherwise bringing the edge to production usage is delayed. To deploy microservices with the service mesh, it is important to identify the right profiling environment — we have many profiling environments — the right environment and the right KPIs. So if the edge has its own KPIs, what KPIs would you map from the service mesh environment: would it be QPS, the QPS-to-latency ratio, or just the 99th percentile? Those are some areas we have to look at to maximize CPU usage and the number of microservices that you can host with a service mesh, while keeping the 99th percentile latency at low milliseconds. Right now we were getting around nine to ten; on future platforms we project it may become four to seven. However, if you want to go lower than that, what should that deployment be? The TMAM analysis that I showed requires deeper analysis with EMON counters; that's the work we are currently conducting, to look at what happens at each layer: the CNI layer, the TCP stack layer, within iptables, the NAT traversals that happen within the lookups. All of these efforts go along with the CNCF effort, and I would like my colleague to speak a little bit about the CNCF effort that has gotten started. Sunku. Yeah, thanks, Mrittika. Yes, there is a good amount of work ongoing and getting started in the CNCF Network SIG under the service mesh working group. So we are looking at deploying some of these different use cases and benchmarking them in a standardized way.
So the idea is to come up with a simple, standard set of benchmarks and provide a standard way of understanding them, or a methodology to understand these benchmarks. Everyone is welcome to participate in the CNCF Network SIG as we scale out these tests and run them on different types of infrastructure. Next slide. So with that, it's a little bit of a call to action. This is part of the Network SIG work as well that we're focusing on. It's building a benchmark that can help you follow a layer seven benchmarking process with a defined set of KPIs and configuration modes. For example, at layer three you have a set of RFCs, such as RFC 2544, to help you come up with results; for layer seven, which is what we are building towards, we need a set of model workload patterns and a methodology to deploy and run those, to give you standard benchmarks. Another set of attributes would be to run on one system and generate primitive baseline data, from which you can in turn estimate capacity as you scale, and also address different virtualized environment setups with their respective cores, queues, memory, memory types, et cetera. So we need to encompass a lot of these different infrastructure environments in order to understand the overall impact of a service mesh. So yes, you're welcome to come join us in the Network SIG, or feel free to talk to us anytime or reach out to us via email for any follow-up questions — we are open to collaborate. With that, thank you very much. Thank you.
As Edge deployments move closer towards the end devices, low latency communication among Edge aware applications is one of the key tenets of Edge service offerings. In order to simplify application development, service mesh architectures have emerged as the evolutionary architectural paradigm for taking care of the bulk of application communication logic such as health checks, circuit breaking, secure communication, and resiliency (among others), thereby decoupling application logic from communication infrastructure. The latency to throughput ratio needs to be measurable for high performant deployments at the Edge. Providing benchmark data for various edge deployments with Bare Metal and virtual machine-based scenarios, this paper digs into the architectural complexities of deploying a service mesh in an edge environment and the performance impact across north-south and east-west communications in and out of a service mesh, leveraging the popular open-source service mesh Istio/Envoy on a simple on-prem Kubernetes cluster. The performance results shared indicate the performance impact of the Kubernetes network stack with the Envoy data plane. Microarchitecture analyses indicate bottlenecks in Linux based stacks from a CPU micro-architecture perspective and quantify the high impact of Linux's iptables rule matching at scale. We conclude with the challenges in multiple areas of profiling and benchmarking, and a call to action for deploying a service mesh in latency sensitive environments at the Edge. The pervasiveness of Edge computing and service mesh constructs within a cloud native environment has grown almost at the same time during the last few years. Requirements for Edge compute to be able to unify both Information & Communication Technology (ICT) and Operational Technology (OT) have brought cloud native deployments and microservice based service offerings to the Edge infrastructure. While Kubernetes has been the most popular model of deploying cloud native infrastructure to offer software services, the service mesh is the emergent application deployment paradigm that decouples the application from implementing most of the software defined networking aspects of microservice interactions. This paper introduces features of the service mesh that are architecturally suitable for Edge compute service offerings and application development principles. To understand the applicability of the service mesh, its architectural principles need to be understood to figure out the suitability of the various mesh benefits to customized Edge deployments. This talk introduces and correlates various Edge requirements to the service mesh's architectural guidelines, then digs further into deployment considerations of the service mesh with Edge deployment types to surface practical communication challenges between the two. This talk: - Provides benchmark tests and their results that quantify the impact of a service mesh on simple Kubernetes based deployments using Istio & Envoy as the service mesh and its sidecar proxy, which can be leveraged for Edge environments. - Provides detailed analysis of the software used to identify bottlenecks using Top-Down Microarchitectural Analysis and CPU hot spot analysis. - Summarizes the gaps identified during the detailed testing of these open-source components. - Showcases the impact of utilizing a service mesh for edge computing.
10.5446/56958 (DOI)
Hi, my name is Nathan Brown and I'm a software engineer at ARM. I'm really excited to talk to you today about faster memory reclamation with DPDK RCU. Specifically, I'll be comparing the DPDK and userspace RCU libraries. First, I'm going to start off with a brief background of what RCU is and what problem it solves. Then, I'll talk about key advantages DPDK RCU brings to the table. Next, I'll walk us through some performance results I have gathered comparing DPDK and userspace RCU, and then we'll have some time set aside for Q&A. Before I get started, I'd like to thank my mentors at ARM, Dharmik and Honnappa, as well as the broader networking team I'm on. Without all of their help and feedback, this presentation wouldn't have been possible. So thank you. Alright, let's get started. What is RCU? Well, as with most things, we can start with a data structure. But since we're implementing a multi-threaded and high-performance application, we're going to want this to be a lock-free data structure. And while in general RCU can work with any data structure you provide, for simplicity in this presentation let's just stick to a linked list that has three elements: A, then B, followed by C. Now we can have multiple readers traversing through this linked list, accessing elements, more or less doing what they please. And because it's lock-free, we can also have a writer thread that is about to do some updates. However, this raises a very interesting question: how can the writer thread safely remove and free element B? Well, the lock-free linked list algorithm tells us how to remove element B from the list: we can atomically update A's next pointer to point to C instead of B. In this way, new readers such as reader1 will only be able to see elements A, then C in the list, and won't be able to discover B. However, current readers like reader2 and reader3, that are already accessing element B, will still see a well-formed list and traverse onwards to element C when they're ready. But this brings about a bit of confusion: how can we safely free element B? We know we can't do it right now, because if we free element B and then reader2 and/or reader3 attempts to iterate forward in the list, they'll end up accessing element B after it's already been freed. In short, we'll have a use-after-free error. So we're going to have to wait. But how long do we wait? We need to wait until all readers have stopped referencing element B. Only at that point in time will it become safe to free element B. But programmatically, how can we do that? How do we know how long to wait? That's what RCU provides. RCU is a suite of mechanisms that enable you to safely reclaim memory in lock-free contexts, by letting you know how long you need to wait until that memory is safe to free. The way it works is that readers report quiescent states to signal when they're no longer accessing shared state. So when reader2 and reader3 in our previous example stop accessing elements on the list, they both enter a quiescent state. We have grace periods for deleted elements, which is the time between the element being deleted and all readers reporting a quiescent state. It ends up being the lower bound on the delay between deleting the element and being able to reclaim its memory; in short, it's only safe to reclaim the memory after this grace period ends. We also like to measure the delete-free delay, which is the actual delay between deletion and freeing for an element.
While in theory this is lower bounded by the grace period, the actual realized delete-free delay may become higher than that due to an RCU implementation's behavior. Pictorially, it may look something like this: we have reader1 and reader2 entering quiescent states and accessing the data structure, quiescent states, data structure, quiescent states, and so on and so forth. When the delete happens, which is the far left bar here, the time until the grace period is up is how long it takes for both readers to enter the quiescent state, shown in green. However, the freeing may occur much later, and thus the delete-free delay may become longer than the actual grace period. Userspace RCU is the most popular userspace implementation of RCU at this point in time; it's available at this URL, and its API is based on the Linux kernel RCU API. In it, synchronize_rcu is a function that starts grace period detection and blocks until the grace period is over. So, in our element B example, the writer might delete element B, then call synchronize_rcu to start grace period detection and block until the grace period is over; once synchronize_rcu returns, since the grace period is over, that means the memory occupied by element B is now safe to reclaim, so the writer is able to free element B. However, we're blocking in the writer thread, and that might not be the most desirable thing. So userspace RCU also provides the call_rcu API, where you provide a callback, and that callback will be invoked by a background thread after the grace period is over. So the writer might delete element B, then call call_rcu and provide enough information such that the background thread can then execute the callback and free the memory associated with element B once it is safe to do so. However, with this background thread approach in userspace RCU, the grace period detection might not start immediately, and thus the delete-free delay might end up being longer because the grace period detection doesn't start right away. DPDK RCU, on the other hand, is based around the idea of tokens, where a token more or less corresponds one to one to an element. You can get a token with RCU start, and this will start grace period detection for that token. Then, at a later point in time, you can call RCU check and provide a token given to you from RCU start, and RCU check will tell you if the grace period for that token is over. So the way our writer might use this is: it will delete element B from the list, then it will get token B via RCU start. Now it knows that it has to wait some time for the grace period to be over, so in the meantime it's going to do some other useful work. Then it can come back and check: hey, is this grace period over? If it is, it knows it's safe to free element B; otherwise it can continue doing useful work. Now, an immediate advantage you may have noticed is the increased flexibility DPDK RCU brings. We weren't blocking in our writer thread, but we also didn't have to rely on yet another background thread to manage our memory reclamation for us. Of course, there's nothing stopping us from having a background thread dedicated to memory reclamation, but we don't need to have one either. The way we can use a background thread with DPDK RCU is: we generate the tokens in the writer, then pass these tokens off to the background thread, and the background thread continually checks the tokens and executes callbacks to reclaim memory as necessary. This is somewhat similar to how userspace RCU's background thread works.
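To make that flow concrete, here is a minimal sketch of the reader and writer sides using DPDK's QSBR API from rte_rcu_qsbr.h; the spoken "RCU start" and "RCU check" correspond to rte_rcu_qsbr_start() and rte_rcu_qsbr_check(). This is only an illustration, not the benchmark code from the talk: the linked-list operations and the "other useful work" are placeholders, error handling is skipped, and real code would allocate the QSBR variable from cache-line-aligned memory.

/* Sketch only: placeholder helpers, no error handling. */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <rte_rcu_qsbr.h>

#define MAX_READER_THREADS 8

struct elem;                                 /* element of the lock-free list */
void remove_from_list(struct elem *e);       /* placeholder lock-free delete */
void lookup_elements(void);                  /* placeholder read-side work */
void do_other_useful_work(void);             /* placeholder writer work */

static struct rte_rcu_qsbr *qsbr;

void rcu_setup(void)
{
    size_t sz = rte_rcu_qsbr_get_memsize(MAX_READER_THREADS);
    qsbr = calloc(1, sz);                    /* real code: cache-line aligned */
    rte_rcu_qsbr_init(qsbr, MAX_READER_THREADS);
}

/* Reader: register once, then report a quiescent state whenever the thread
 * no longer holds references into the shared structure. */
void reader_loop(unsigned int thread_id)
{
    rte_rcu_qsbr_thread_register(qsbr, thread_id);
    rte_rcu_qsbr_thread_online(qsbr, thread_id);
    for (;;) {
        lookup_elements();
        rte_rcu_qsbr_quiescent(qsbr, thread_id);     /* "I hold no references" */
    }
}

/* Writer: delete, grab a token ("RCU start"), keep working, then poll the
 * token ("RCU check") and free once the grace period has ended. */
void writer_delete(struct elem *elem_b)
{
    remove_from_list(elem_b);
    uint64_t token = rte_rcu_qsbr_start(qsbr);

    do_other_useful_work();
    while (!rte_rcu_qsbr_check(qsbr, token, false))  /* pass true to block */
        do_other_useful_work();

    free(elem_b);
}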
Something else I haven't mentioned yet is that DPDK RCU enables you to partition threads into different RCU groups. For example, let's take 10 threads. Let's say we know that threads 1 through 5 are accessing hashmap A, and only those threads. Likewise, threads 5 through 10 are accessing hashmap B. Thread 5 is a bit unique here in that it can access both hashmaps. However, thread 7 is a bit slow while accessing hashmap B and it's taking some time; it's not going to report its quiescent state as quickly as, say, threads 1 through 5. So if thread 7 is never going to access hashmap A, but it's the one running slow, why should grace periods become elongated for hashmap A? They don't have to be. We know that thread 7 isn't ever going to access hashmap A and it has no information about the contents of hashmap A or its elements, and DPDK allows you to model this with these different RCU groups. With userspace RCU, however, all of the threads are global and treated the same, so it doesn't have the context needed to provide this additional flexibility. Additionally, in our benchmarks, DPDK RCU on average produces faster memory reclamation than userspace RCU. This is great: faster memory reclamation generally leads to lower memory overhead and less memory fragmentation. Additionally, DPDK RCU is able to sustain this faster memory reclamation even with preemptible readers. When a reader is preempted, it can't report a quiescent state, which can elongate grace periods and delete-free delays. However, even in the face of this, DPDK RCU still performs faster than userspace RCU, which is great. Finally, DPDK RCU is already integrated into some DPDK data structures, for example rte_hash. This means that with some minor code modifications to the initialization of your data structure, you can start using DPDK RCU today and already start reaping the benefits. So, a little bit about how I benchmarked delete-free delay. I took DPDK's lock-free hash table, pre-populated it with four million elements, and allowed multiple readers and a single writer to start accessing it randomly. So the readers will be looking up random elements and the writer will be randomly inserting and deleting random elements. In order to simulate the writer doing something else rather than just modifying this hash table, I limited it to about 5K deletes per second. In order to reach this limit with userspace RCU, I needed to use the call_rcu method of reclamation rather than blocking within the writer thread. This means that for userspace RCU, there's an additional thread going on. So in order to have a bit more of an interesting comparison with DPDK, I ran DPDK under two different configurations: one where DPDK had a background thread, where the writer thread would generate the token and then pass the token and callback information to this background thread, which would then reclaim the memory as soon as it was able to; and another where there was no background thread for DPDK. Without the background thread, when the writer performs a delete, it checks to see if it can reclaim memory for any previous deletes, and if it can, it invokes the callbacks and reclaims the memory at that point in time. So here are some results. On the y-axis, we have delete-free delay in milliseconds, and on the x-axis, we're varying the number of readers. In this configuration, we're giving each reader its own core, so we don't have to worry about preemption yet.
Here we can see that I have DPDK, then userspace RCU, and then DPDK with a background thread being measured. And we can see, wow, the performance results are great. DPDK is consistently outperforming userspace RCU, as is DPDK with a background thread. In fact, we see that DPDK RCU is 72 to 77% faster to reclaim memory than userspace RCU, and if we add in a background thread, it becomes 79 to 88% faster. That's great. However, what happens if we put all of these readers on one core? When we have all these readers on one core, only one reader is able to make progress at a time, and the rest of the readers will end up being preempted. Here we can see a bit of what I was alluding to with the poorer performance once preemption comes into the picture. Userspace RCU, for example, goes from 6.6 milliseconds of delete-free delay all the way up to 154.7 milliseconds of delete-free delay. That's a huge increase. And while DPDK and DPDK with a background thread also show increases as the level of preemption increases, it's not quite as bad as userspace RCU. In fact, DPDK RCU ends up being 41 to 61% faster to reclaim memory than userspace RCU in this configuration. And once readers start being preempted, DPDK with a background thread and without a background thread end up performing very similarly. However, a one-core workload isn't realistic for every workload out there today, so I also took eight readers and spread them across various numbers of cores, ranging from one core to two, four, and eight cores. Here in the middle are the two most interesting graphs: eight readers on two cores and eight readers on four cores, so we both have multiple cores and some amount of preemption on each core. And as we can see, DPDK with and without a background thread still consistently outperforms userspace RCU in this configuration. DPDK RCU ends up being 29 to 72% faster to reclaim memory than userspace RCU, and with preemption, DPDK performs similarly both with and without a background thread. An interesting conclusion from this last point is that DPDK ends up waiting more for the readers to report their quiescent states than for a background thread to try and reclaim memory as fast as possible. So that implies that preempted readers are certainly hampering the performance of these RCU implementations. Thank you very much for your time. I'm looking forward to hearing your questions. And so, as such, our implementation has been somewhat tailor-fit for DPDK. But the idea of tokens for RCU is broadly applicable. So if you want to make use of our implementation, it'll have to be within DPDK, but if you want to use token-based RCU, it shouldn't be too difficult to port the token implementation to some other projects as well. Okay. So you mean it's inside DPDK, but we can use it without using the rest of DPDK if needed. And the other part of the question is: is there something in the implementation which is specific to DPDK? How is it better than liburcu for DPDK? I see. So the main advantages over liburcu come from the token design and the greater flexibility and better performance. For those reasons, we opted to write our own RCU implementation for DPDK rather than use liburcu. And since we were writing the implementation for DPDK, we ended up using some of DPDK's other libraries, like the ring data structure or some of the EAL functions. Okay. So if we want to use it inside DPDK, we need to have this RCU integrated in the DPDK data structures.
You said it's already integrated for the hash data structure. What about others? I'm not sure about which others, but integrating into the data structures is only one of the easier ways to use RCU. You're still able to use the main API functions directly yourself: reporting quiescent states, getting tokens with RCU start, and using RCU check to check whether the grace period for a token has ended. We've integrated these functions into the rte_hash data structure, for example, which you can of course draw inspiration from. Okay. Someone is asking if you plan to write a paper about this work, about this new RCU. Yes, that is a project currently underway. Okay, cool. I suppose you are also discussing with the father of RCU, Mr. McKenney, if I remember right. Yes, I believe he may have been involved with the reviewing of the presentation. Yes, right. Okay. Last question: you were talking about partitioning threads, can you explain more about that? Sure. So liburcu treats all of the reader threads the same, whereas with DPDK RCU you can form different groups of reader threads that are independent of each other. And so if one group ends up not performing as fast as another group, or perhaps there are more threads in it, so synchronization takes longer, you can separate them entirely.
DPDK added an RCU library with a novel method to reclaim resources. We have been running tests to understand the performance differences between the DPDK RCU and the userspace RCU library. In our tests, we find that DPDK RCU can perform reclamation faster and performs significantly better when pre-emptive readers are involved. Other than the performance, DPDK RCU has several advantages such as not requiring a background thread for reclaiming resources and the ability to integrate with existing libraries without having to modify the application. This talk will present the various testing done on the DPDK RCU and userspace RCU libraries and their results. It will go into the details of the pre-emptive reader problem, which affects use cases beyond DPDK, and show that the DPDK RCU library can reduce the reclamation time.
10.5446/56959 (DOI)
Hello everyone, my name is Hedy, I am a software engineer at Cisco and I'm glad to present to you, with Arthur, the statistics consumption model in VPP and some of its applications. Hello everyone, I'm Arthur, I work at Cisco and I'm part of the team developing the Vector Packet Processing software, aka VPP, and we use it in different projects at Cisco. So I hope you have not eaten too much, because you'll need some appetite for the statistics: we'll have a look at the stats segment in VPP and also at various clients for the statistics. So first a quick intro on VPP, and I think some of you are already familiar with this software, so I'm going to be quick on this one. VPP is an open source networking data plane working in user space. It is highly optimized for performance, speed and scale, and it is designed as a graph of nodes. We take vectors of packets as inputs; the packets go through different sets of nodes depending on their contents, and they can be dropped, sent to another interface, and so on. So what is the statistics segment in VPP? It is a model based on shared memory. We use vectors and we make the assumption that we have a single writer, the main thread, which can add new counters to the statistics segment or extend some existing counters, and that we also have multiple readers. We also make the assumption that the number of writes is really low in comparison to the number of reads, and that's why we have adopted an optimistic locking strategy. What is important in the stats segment is that reading is really, really cheap: it costs nothing for the reader and we don't stop the data plane at all when doing the reads, we just read the shared memory. However, maintaining the counters costs a bit in the data plane, because we have to update the statistic counters and that may induce some cache line misses. So let's have a look at the shared memory layout. We have a main header containing the directory vector. It also contains an error vector, an epoch counter and an in-progress boolean for the optimistic locking part. The directory vector consists of a list of statistic entries, and each entry has a given type. For example, the simplest type is just a scalar; we have the example here of the number of worker threads, and the data is simply the value of the scalar. We can also have a vector: for example, the interface Tx counter is a vector indexed by threads and interfaces. The data is simply the pointer to the array, so we have here the number of packets and the number of bytes per interface per thread. Then we have a symlink type. This type contains two indexes: the first one is the index of the entry in the directory vector, and then we have the interface index, which is the index of the row in the array. Note that when doing updates in the data plane, we have the pointer to this array stored in the data plane so that we can access it directly. Now, let's talk about the optimistic locking part. First, taking the lock means setting the in-progress boolean. Releasing the lock means un-setting it and incrementing the epoch counter. When we do a write, that is, when we add a counter to the stat segment or extend one (for example, if we add a new interface), we take the lock, we update the directory vector if needed, so pointers may change, we update the counter vectors as well, and then we release the lock.
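To illustrate the writer procedure just described, and the reader-side retry loop that we describe next, here is a rough sketch. The struct fields and helper names are hypothetical and do not match VPP's actual stat segment code, and the memory barriers a real implementation needs are left out.

/* Purely illustrative: hypothetical names, no memory fences. */
#include <stdint.h>

typedef struct {
    volatile uint64_t epoch;        /* bumped each time a write completes */
    volatile uint32_t in_progress;  /* set while the main thread is writing */
    void *directory_vector;         /* vector of statistic entries */
} shared_header_t;

uint64_t read_entry(void *directory_vector, int entry_index);  /* placeholder */

/* Writer side: main thread only, under the worker barrier. */
void stats_write(shared_header_t *h)
{
    h->in_progress = 1;             /* take the lock */
    /* ... add or extend counters; directory_vector pointers may change ... */
    h->epoch++;                     /* release: bump the epoch ... */
    h->in_progress = 0;             /* ... and clear the in-progress flag */
}

/* Reader side: optimistic read, retried if a write happened meanwhile. */
uint64_t stats_read(shared_header_t *h, int entry_index)
{
    uint64_t value, epoch;
    do {
        while (h->in_progress)      /* wait for any in-flight write */
            ;
        epoch = h->epoch;
        value = read_entry(h->directory_vector, entry_index);
    } while (h->epoch != epoch || h->in_progress);
    return value;
}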
What it means for readers is that they need to wait until in-progress is un-set, they need to store the epoch value, they read what they want to read, and at the end of the read they need to check that the epoch has not changed and that in-progress is not set. If that's not true, it means that there has been a write during the read, so we have to redo the read and try again. We have no concurrency issues in the stat segment because adding or extending counters is done under a barrier by the main thread, which means that the worker threads cannot update the counters in the meantime. Now, let's present some of the different clients we have in VPP that use the stats segment. The simplest client we have is a simple executable that connects to the stats socket to get info about the shared memory location. It allows you to list or dump the statistics available; it takes a list of patterns as input and returns the matching counters, so either a list or a dump of the counters depending on the command. It is easy to launch, so it's pretty handy to have raw access to the statistics. We have also developed a FUSE filesystem in Golang thanks to the go-fuse library. In this model, we have different types of nodes: we can define directory nodes and file nodes. A directory node contains its children, in our case, and it keeps track of the epoch. Then we also have file nodes, which correspond to the counters, and in these nodes we store the counter index so that we just have to look at the given index in the directory vector instead of having to fetch the pointer each time. So let's take an example. When you open a directory in the filesystem, we look at the current epoch. If the epoch has changed since the last update of this directory, we need to update all the subdirectories and the files in the subdirectories, so that we are sure that there were no additions or deletions in the meantime. If some files were added, for example, we can still update the directory and add the given counter to it; it would be the same if we had some deletions, we would just delete the counter inside this directory. And note that these updates are done recursively so that the subdirectories are also updated. When we want to read a file this time, we just access the given counter in the directory vector and we get the value as output. So now let's have a quick demo of this. On the left-hand side, we have an instance of VPP running and we can access the interfaces in the VPP CLI. For now we have only the default interface local0, as expected, because we have not run any traffic. Now we can try to install and start the stats filesystem; it's as simple as a make install and a sudo make start. Here we go. So now we can try to access the filesystem directory, named stats_fs_dir, and we can ls everything we have in that directory in order to fetch all the data from the directory vector and update the epoch counters. Now we have access to the interface names, and as expected there is only this local0 interface. Now let's try to add some other interfaces and run some traffic with a little script in the VPP CLI. And here we go. Let's try to access the interface names again, and a bunch of new interfaces have appeared. So let's try to access the counters for the new interface pg0, for example.
So we go into the directory interfaces/pg0, which is a symlink directory, and we have a bunch of counter files. Let's try, for example, to cat the ip4 file, and we get the number of IPv4 packets that have gone through this interface. We can also access the files corresponding to the Rx and Tx packets and bytes, and here we have all the info we wanted for the Tx and Rx packets and bytes, indexed per thread, for this interface. Now we can also access some statistics about the processing nodes of VPP. We need to go inside the nodes directory and then, for example, inside the ip4-input directory, which is the directory for the ip4-input node, and see what we have. We have a bunch of other counters and we can access them: for example, we can access the number of vectors per thread, and there we are. Then we can also access the number of clocks or calls, in order to get the number of calls for this node in VPP, and there we are. I'm going to present another way of consuming our VPP stats, which is the Prometheus exporter that gathers the statistics. Our use case this time is Calico/VPP. So what is Calico/VPP? Calico is an open source Kubernetes networking and network policy solution; it also supports other platforms. It manages networking between pods, nodes, VMs. The main goal of integrating VPP is to accelerate the networking of Kubernetes clusters that use Calico. So instead of using the standard Linux networking pipeline with iptables as a data plane, nodes run the VPP data plane and we provide faster networking to their pods without requiring any changes to the applications running in the pods. This is really meant to be transparent: when running VPP we do not have any additional requirements compared to regular Calico. When we use the VPP data plane, the VPP instance is inserted between the host and the external network. On startup the host interface is replaced by an uplink interface and a tun interface to keep communicating with the outside as normal. As for the pods, VPP creates a tun interface for every pod. So VPP handles the interfaces and packet transmission and creates a tun interface for every pod; thus, it is very useful to gather statistics about pod interfaces, such as Rx packets, Tx packets, number of errors, etc. So this is a high-level view of the agent that you get on each node when you deploy Calico/VPP. The agent is the process responsible for all the Calico-specific configuration in VPP. In this container we have all the runtime configuration of VPP in the form of running servers: the CNI server, the routing manager, the services manager and the policies manager. They interact with the Kubernetes and Calico APIs to configure VPP. In order to export our VPP stats as metrics, we add a new component, the Prometheus server. As part of the agent, the Prometheus server knows the state of the CNI server and the created pod interfaces. And at every interval of time, it fetches the real-time statistics through the VPP API in Calico/VPP, which accesses the stat segment shared memory and gets the statistics needed. Among the stats, we actually select the ones related to pod interfaces. So let's take a deeper look at how this works. Here are the nodes of our Kubernetes cluster. Pods are created on nodes, VPP adds interfaces for these pods, and we'd like to collect those interfaces' stats. To expose Prometheus metrics in our application, we need to provide a /metrics HTTP endpoint on every node, which we call a node exporter.
Every node exporter is running in the Calico/VPP agent. It accesses the stat segment shared memory through the VPP API, converts the stats into Prometheus metrics, then exposes them on the HTTP server. Now the Prometheus server needs to be configured so that it targets those endpoints. Prometheus uses an HTTP pull model in order to export our statistics in the form of real-time metrics recorded in a time series database. So the targets are the /metrics HTTP servers running on the Kubernetes cluster nodes, and the metrics come from our VPP tun interface statistics. This way, the performance of the interfaces in our system is displayed as a nice graph. Now let's watch this demo to better understand the feature. So we have these nodes in our Kubernetes cluster, and here are our node HTTP endpoints. Every node has an HTTP endpoint that provides pod interface statistics; for example, on this node, we have these metrics. For the configuration, this is how Prometheus is configured: we can see here the node IPs and the port as targets. Prometheus is serving on the 1990 port, where it has access to the cluster to collect metrics. So let's take a look at the graph. Here are the different metrics provided. Let's select tx_bytes, for example, for transmitted bytes, execute, and then this is the graph. So in the last five minutes, values are at zero. Let's run a test to see what happens. In a few seconds, we have this flow of traffic for our test. Thanks for watching, and on to any questions that you might have. Okay guys, thank you for your talk. Let's start with a few questions. So, how do you correlate information known by VPP, e.g. the interface names, with the indexes you get in the shared memory counters? So we have a couple of counters that list all the interface names and node names, which you can also access in order to get the correct indexes into the other counters; you can just map the indexes you get in the names files to the indexes you have in all the other files and you get the stats for the interface, for example. Okay, nice. In the Calico/VPP use case, how do you correlate the application logic known by the control plane with the interface indexes? So let me answer that. The information coming from the VPP stat segment concerns all VPP interfaces and is keyed by indexes, and the application logic known by the control plane knows which container is giving the information and, for every interface, which pod is concerned; the correlation between both is simply done by the fact that they are running in the same process. The Prometheus server is part of the Calico/VPP agent control plane, and it runs in the Go application that accesses the VPP stats. So this correlation is important because it helps select pod interfaces among the available ones. Okay, thank you. As we're short on time, I think more questions can be answered in the chat. I want to thank you guys for coming around and giving this nice talk. If you've got any questions, type them in the network devroom chat; we've got 20 or 40 seconds left I guess. Thank you. Thank you for listening and thank you all for joining this talk. Thanks. Have a nice day at FOSDEM 2022. Bye. Bye. Bye. Bye.
VPP (aka Vector Packet Processing) is a fast network stack running in Linux userspace. It is designed to handle packets with high performance, which makes gathering statistics efficiently a must-have. The model that has been chosen in VPP to provide up-to-date statistics is built upon shared memory and optimistic locking. The counters are updated in this shared memory at a rather low cost by the data plane and can be read out at almost any time by all the consumers. We will first describe this model in more detail. The consumption of these stats may take various forms depending on the use case and the application needs. That's why we have developed different high-level components to access them: 1) A filesystem in userspace: thanks to go-fuse, we can mount a filesystem organizing statistics in folders and files, in a similar fashion to '/proc' in Linux. 2) A Prometheus agent: applied to Calico/VPP, a new dataplane for Calico - the popular cloud native Kubernetes network plugin - based on VPP. Prometheus is integrated as a monitoring tool in order to export our statistics in the form of real-time metrics collected from targets. Metrics come from our pod interface statistics, and targets are Calico/VPP agents running on our Kubernetes cluster nodes. During the presentation, you will have a quick demo of these components.
10.5446/56960 (DOI)
Hi and welcome to this FOSDEM talk: Kubernetes networking — is there a cheetah within your Calico? It's about even faster Kubernetes clusters with Calico, VPP and memif. I'm not actually the lead speaker today; that falls to Nathan Skrzypczak. He's a software engineer at Cisco and a Calico and VPP integration contributor. He's a biking and hiking enthusiast and even enjoys CK hacking. We will get to him in just a couple of slides. In the meantime, my name's Chris Tomkins. I'm a lead developer advocate at Tigera, the primary contributors to Project Calico. Today's obsession for me: I'm trying to learn Japanese on Duolingo, but I'm getting nowhere quick, and I'm listening to lots of music — I especially enjoy Rusty; if you like music, check him out. My role is to champion user needs and support Project Calico's users and contributors. I'd like to start by giving you a quick overview of Calico, how it works, and some of the lower level design decisions that the Calico team made that have helped to enable some really awesome work done by Alois and the VPP team at Cisco. We have a short talk today, so we'll need to be brief in order to allow time for questions. Keep in mind that you can learn a great deal about Calico at projectcalico.org and about VPP and its use of memif at fd.io. With that said, the Project Calico community develops and maintains Calico. Calico is an open source networking and network security solution for containers, as well as virtual machines and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Mirantis Kubernetes Engine, OpenStack and bare metal. It's really battle tested and can operate at huge scale. You can scale in lockstep with Kubernetes clusters without sacrificing performance. It offers granular access controls, including a rich framework and security policy model for secure communication, and full Kubernetes network policy support that also works with the original reference policy implementation of Kubernetes. The main benefit we are building on for today's talk, though, is that Calico supports multiple data planes, including VPP, iptables and eBPF, for the right fit across even heterogeneous environments. Whatever feature set you have available (or don't) on your cluster in terms of Linux kernel, hardware support and underlay physical network, we should have the data plane that gives you the best performance and features. I won't spend a long time talking about what a data plane is; after all, the audience are all network engineers. You probably know a control plane is responsible for figuring out what's going on in the network, the consensus of high level things such as routing. It's typically implemented on a general purpose CPU. It manages complex device and network configuration and state. The data plane is different. It's responsible for moving traffic around and should be responsible for nothing else. Therefore it can make really good use of hardware acceleration features. It should be designed to be the simplest possible implementation of the required packet forwarding features. It implements a fast path for the traffic, and I like to give the success of MPLS as an example of a great data plane: it has a lot of unnecessary functionality torn out of it, so it doesn't have things that IP has such as variable length subnet masks and checksums, and that leads to minimal processing per packet and fast, affordable devices. Control plane and data plane separation achieves a lot of things.
It achieves specialized, minimal data plane code and a targeted data plane feature set. It achieves code reuse in the control plane and future proofing. It means that the data plane can be very adaptable, and it provides agility for the end user, because you can match the feature set to what you really need on your clusters. Calico offers several data planes. The Linux iptables data plane, which is heavily battle tested, offers good performance, great compatibility and wide support. We have the Windows Host Networking Service, which allows Windows containers to be deployed and secured. And we have a Linux eBPF data plane which scales to higher throughput and uses less CPU per gigabit. It reduces first packet latency for services and preserves external client source IP addresses all the way to the pod. It also supports direct server return for better efficiency. But today we will be talking about a new data plane and the features it offers, especially around memif, and that data plane is VPP, so I'll hand you over to Nathan to tell you more. Thanks Chris. So first a few words about VPP. It has been presented in many talks — you most probably have seen a slide like this one — so I won't spend too much time on it. But in short, VPP is a userspace network data plane which is highly optimized for packet processing, and at the application level as well. It relies on vectorization to provide a wide range of optimized L2 to L4 features, from NAT and tunnels to TCP and QUIC. It is also easily extensible through plugins, which is something we are leveraging a lot for the Calico integration. If you'd like to learn more, don't hesitate to go to fd.io; there are plenty of resources available out there. So Chris did speak about data planes and the fact that Calico already supports a few of them. So the question is: how do we become one? That's what we asked ourselves when starting the Calico/VPP integration. In order to make this happen, we built a control plane agent running as a daemon set on all nodes and we registered it as one of the available data plane options. This agent is responsible for starting VPP, listening for Calico events and programming the data plane, VPP, accordingly. We also built a couple of custom plugins with optimized implementations, doing NAT for service load balancing, implementing the Calico policies' specific logic and so on. We tweaked the VPP configuration to make it friendly to use in a container-oriented environment: using interrupt mode for example, running without huge pages, leveraging hardware and software offloads, and so on. With all this we had all the bricks to run VPP-powered Kubernetes clusters. So let's do that. Ok, but first, what happens under the hood? Essentially, what we do is swap the network logic that was previously happening in Linux over to VPP. Now, because VPP is a userspace stack, we have to do a few things differently compared to what was previously done by Linux. In order to insert VPP between the host and the network, we grab the host network interfaces specified in the configuration and consume them with the appropriate drivers. We then restore the host connectivity by creating a tun interface in the host root network namespace. We replicate the original uplink configuration on that interface — the addresses, the routes — so that things behave similarly from the host standpoint.
Pods are connected just like the host, with a tun interface created in each pod's network namespace, and the Calico control plane runs normally on the host and configures the data plane functions directly in VPP. Since we use tun interfaces and not veths, we don't need to worry about layer 2 in the pods, which better matches the Kubernetes model. But now you may ask: why do all this? What does this allow us to do? First, having the data plane in user space makes evolution easier to implement and deploy. This allows us to add new functionalities, for example MagLev load balancing for services, IPsec, or SRv6. It also enables experimenting with the network model, for example exploring how to expose multiple networks to a pod. But most importantly, regarding performance, with this we can look into optimizing both the network logic running in VPP as well as the way pods consume it. That gives us two good areas to start optimizing how fast the Calico cat can run. Let's focus on performance. The first question is: what are we trying to optimize? The way applications usually consume packets is with socket APIs. It's quite standard, but you have to go through the kernel, and it's a code path which wasn't designed for the performance levels of modern apps. That's actually why we came up with GSO as a network stack optimization. But here, as we have VPP running on the nodes, it would be nice to be able to somewhat seamlessly bypass the network stack and pass all packets directly to VPP without having to touch the kernel. That way we might also spare a few copies on the path. And to do that, fortunately, VPP provides two ways for an application to attach and consume packets without touching the kernel. The first one is memory interfaces, or memifs. It's a standard for exchanging packets over shared memory, with several highly optimized clients implemented: you have ones in Go and in C, you have DPDK, and obviously VPP supporting them. Basically, from the app's standpoint, when using those clients you get a handful of functions for receiving and sending packets, a bit like what you would do with AF_PACKET in Linux. The second way is VPP's host stack. It's a set of optimized L4 protocol implementations living in VPP: we have TCP, UDP, TLS, QUIC and a few others available. This allows VPP to terminate the connections and make the stream or datagram contents available to the client app through a shared memory. This memory can then be consumed with a dedicated library called the VCL, the VPP Comms Library. Similarly, when linking against this library, your app will be able to leverage connect, accept, receive and send primitives talking directly to VPP. So those two methods allow us to build two consumption models. If requested, we expose a memory interface, a memif, in the pod with the same configuration as its regular interface. This can for example be leveraged by an application handling small UDP packets at high speed; it can do so with either go-memif, libmemif, DPDK, or maybe another VPP running in the pod. And we can also expose VPP's host stack in the pod, again if requested. That way an application handling TCP, TLS or QUIC flows can, with the VCL library, connect or accept directly in VPP, thereby bypassing the protocol implementation in Linux. All this is exposed with simple pod annotations, and it enables full user space networking with zero copy from the app to VPP.
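As an illustration of the VCL consumption model, here is a rough sketch of a client using the VPP Comms Library C API from vppcom.h. Treat it as a sketch only: the exact signatures and endpoint setup can vary between VPP versions, and error handling is omitted. The memif path looks similar from the application's point of view, but with libmemif's packet rx/tx burst calls instead of stream reads and writes.

/* Sketch of a TCP client whose connection is terminated by VPP's host
 * stack via VCL; signatures may differ slightly across VPP versions. */
#include <stdint.h>
#include <arpa/inet.h>
#include <vcl/vppcom.h>

int vcl_echo_client(const char *server_ip4, uint16_t server_port)
{
    char buf[1024];
    struct in_addr addr;
    vppcom_endpt_t server = { 0 };

    /* Attach this process to VPP's session layer (reads the VCL
     * configuration made available in the pod). */
    vppcom_app_create("demo-client");

    int session = vppcom_session_create(VPPCOM_PROTO_TCP, 0 /* blocking */);

    inet_pton(AF_INET, server_ip4, &addr);
    server.is_ip4 = 1;
    server.ip = (uint8_t *)&addr;
    server.port = htons(server_port);

    /* connect / write / read all terminate in VPP, bypassing the kernel */
    vppcom_session_connect(session, &server);
    vppcom_session_write(session, "hello", 5);
    vppcom_session_read(session, buf, sizeof(buf));

    vppcom_session_close(session);
    vppcom_app_destroy();
    return 0;
}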
All this while still being able to run regular services like DNS through the regular interface, because we also keep the regular net dev configured in the pod. So let's see how fast this can go. We'll take a reference configuration using regular Linux. Here, on the right hand side, we have a server running a traffic generator, which is T-Rex, sending UDP packets as fast as it can over a 40G link. On the left hand side, we have a Kubernetes node running a test pod where we will measure the received traffic. We don't send the traffic back to the generator, in order to keep the setup simple, but it shouldn't impact the results that much, because in the end we are speaking about packet processing capabilities, so adding the return traffic is just about doubling the number of flows or packets per second. The client pod here is Linux, so we will use the ifpps utility directly on the pod interface to see how fast Linux receives, and drops, packets, as here we have no application actually reading them. An additional limitation of this setup to keep in mind is that you will often need AF_PACKET or AF_XDP to get the best small-packet performance out of the net dev which is exposed in the pod, and that will require elevated permissions for the pod, so that's something to know. If we take the same situation and install Calico VPP instead, the uplink interface will end up being owned by VPP and the pod will still have an interface, which will be a tun interface with the virtio backend. Here we benefit from user space networking on the uplink side, but our packets still have to go through the kernel, which still limits performance, even though we are leveraging the optimized virtio backend. Now let's modify our setup to use the memif instead, and create the third configuration. Here our client pod on the left has to support attaching to a memif interface; that's quite straightforward here, as we are running another VPP instance within the pod, which obviously is able to attach to memif. And by doing so, the packets are fully handled in user space in this configuration, from their reception on the physical interface to the delivery to the app. So let's see how those three setups compare when receiving packets. If we send small UDP packets, 64 bytes, what we see is that a regular virtio interface is able to sustain about 3 million packets per second. A VPP worker can handle 8.7 million packets per second when processing 10,000 different flows. This drops a bit when the number of flows grows, mainly due to the filling of the flow table, and here we are showing the performance with 10,000, 100,000 and a million flows. This scales linearly with the number of VPP workers, meaning that with 4 workers and 10,000 flows we are able to receive about 33 million packets per second, which is about 8 times 4. All this concerns traffic going directly to pod addresses. It also works with service IPs, for which we have a performance penalty of about 5% compared to pod IPs. This is related to the rewriting of the address and port and the fixing of the checksums we have to do. Packets per second are not always very explicit, so let's look at bits per second. We'll send 300 byte packets and extrapolate the throughput out of the packets per second (the conversion is simple arithmetic, sketched below). With bigger packets the link quickly becomes the bottleneck, because it's only 40G. We can still see a pretty linear scaling, at least between 1 and 2 workers, with one VPP worker being able to process 15 Gbps and two workers 29 Gbps when handling 10,000 flows.
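As a quick aside, the extrapolation mentioned above is just packets per second times frame size times eight, plus roughly 20 bytes per frame of preamble and inter-frame gap if you want the on-wire number. A tiny self-contained helper, using the figures quoted in the talk (nothing VPP-specific here):

```c
/* Back-of-the-envelope conversion between packets/s and bits/s, the same
 * extrapolation used for the 300-byte results above. */
#include <stdio.h>

static double pps_to_gbps(double pps, double frame_bytes) {
  return pps * frame_bytes * 8.0 / 1e9;            /* frame bits only      */
}

static double pps_to_gbps_wire(double pps, double frame_bytes) {
  /* +20 bytes per frame for preamble, SFD and inter-frame gap. */
  return pps * (frame_bytes + 20.0) * 8.0 / 1e9;
}

int main(void) {
  printf("8.7 Mpps @ 64B  ~= %.1f Gbps (%.1f Gbps on the wire)\n",
         pps_to_gbps(8.7e6, 64), pps_to_gbps_wire(8.7e6, 64));
  printf("15 Gbps @ 300B  ~= %.2f Mpps\n", 15e9 / (300 * 8) / 1e6);
  return 0;
}
```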
On the Linux side, we receive about 300 Mbps, which translates to about a million packets per second, which is roughly the Linux limitation of tun interfaces. But you might say: memifs are great, but I'm doing TCP and on top of this I'm using Envoy as a proxy, so can we also optimize this? I'd say this qualifies as a great use case for VCL, so let's build another testbed and see how it performs. We took the same two machines. On the machine on the right we changed the packet generator to wrk, in order to track requests per second instead of raw packets per second. Data is served by NGINX on the same server, but obviously we don't send requests to NGINX directly. We have our same cluster on the left running Envoy in a pod, acting as a proxy between wrk and NGINX, so basically we'll be benchmarking packets going through Envoy. In the same way, if we enable the VPP data plane in Calico, the setup will end up looking like this. This should already allow some performance gain, with the benefit of running Envoy unmodified. Similarly, if we request VCL support and leverage the VCL Envoy integration that Florin Coras has implemented, we are able to make the proxy's TCP termination happen directly in VPP. The Envoy team builds daily contrib images with this integration, so it's really easy to leverage in a pod. So let's see what figures these three setups give us with wrk. The requests per second we get are as follows. With all data planes, Envoy scales a bit sublinearly. On Linux it goes from 13,000 requests per second with one worker to 100,000 requests per second with 10 Envoy workers. eBPF, which is the Calico eBPF mode, performs quite similarly: as we are running NGINX and wrk out of the cluster and we are targeting pod IPs, we are benefiting a bit less from its advantages. Calico VPP with the regular virtio net dev, so the second configuration, goes from 16,000 RPS with one worker to 130,000 requests per second with 10. And finally Calico VPP with VCL, the third configuration, gives the best result, reaching 200,000 RPS with 10 Envoy workers. But comparing setups is a bit tricky here, because with VPP, so both VPP virtio and VPP VCL, we have one dedicated core handling the networking, whereas with Calico Linux and eBPF the networking happens in the kernel on the same cores as Envoy. The overall comparison is therefore between N Envoy workers running with VPP and N plus one Envoy workers running with Linux. An even fairer comparison would certainly be to plot RPS by CPU usage on the cluster and see how those configurations scale with regard to each other. But first let's take a look at latency to see how those configurations behave as well. Here we have the various latencies measured when scaling Envoy from 1 to 10 workers. We see it globally improves as the number of Envoy workers increases, and in some cases we start to see latency re-increasing, mostly when the data plane starts to struggle. We can see that VPP with VCL performs quite well, because as we are terminating TCP directly in VPP, we can skip the extra hop of going through the tun, and thus keep the latency quite under control. And finally, if we come back to the requests per second results and plot them alongside the global CPU usage measured on the machine running Envoy, we get the following graph. The dots represent tests with an increasing number of Envoy workers, from 1 to 10. The N versus N plus one comparison issue I mentioned earlier appears clearly at the bottom left of the graph.
Envoy Linux with two workers falls approximately in the same ballpark as Envoy VPP with one worker, and this is due to the extra VPP worker we use in the VPP case. This leads to the performance discrepancy only making itself clear with a higher number of workers. For example, with 5 Envoy VCL workers and 1 VPP worker we are serving as many RPS as with 10 Envoy Linux workers. An additional improvement area is that we are still running VPP in poll mode, so switching to interrupt mode should improve CPU usage, as here we are busy looping one CPU with a VPP that's not fully loaded, so we're essentially wasting some unused CPU cycles. We are definitely planning on making this work and testing it soon. That's it for the numbers we got in this last batch of optimizations. I would like to thank Chris and the whole Calico team for their help and support that allowed us to build this, and I'll let Chris conclude on the next steps and how to stay tuned on what will be happening in the coming months. Thanks. Thanks Nathan. So in summary, VPP is a great match for Calico and it's going from strength to strength. This is a new user space data plane option for Calico, and using memif offers a code path which can handle the incredible performance levels that we've learned to expect from modern apps. VPP complements Calico's workload protection with incredible WireGuard performance, and it lets you stay ahead of the curve by offering advanced support for additional features. VPP and Calico are pushing forward and achieving great results. Currently the project is expected to move from tech preview to beta status in version 3.22, which may well already be live by the time you see this. So if you'd like to stay up to date on this project, don't hesitate to join the VPP channel in the Calico Users Slack. We publish our releases there, and if you'd like to try it out, head over to the Calico documentation which has setup instructions. If you have any questions at this point or any later point, don't hesitate to ping us on the Slack channel as well, or you can ask them straight away. Thanks for listening. Okay, well, great, that was a fantastic, action packed presentation. You guys packed a lot of information into a short amount of time. Thanks for doing that, because that actually left us with some time for questions, so I see we have a few already and I'm sure more will come in, so let's jump to those. Ray had asked: how robust is VPP Calico compared to, say, Calico with a Linux deployment, and how well battle tested is it? Yeah, so I'll take that. Nathan and I were talking offline a little bit about this. The iptables data plane has been in production for some years and in a huge number of deployments, and we're really proud of that. So it's reached a real level of stability and maturity. The VPP data plane, although it's exciting and the features are there, is not at the same degree yet. You know, we're still moving into beta. So it's an exciting time, the features are there, and it's a good time to be involved, but the stability is not yet comparable to where the iptables one is, I think it's fair to say. And you know, we talked a little bit before, there was a great talk last year about this integration, and it's really great to see the continued progress here, so I'm imagining next year we're going to hear about how much better it is now. Maybe we can target GA at some point. That'll be right next. Definitely. So there were some setup steps and YAML files that were mentioned.
Are those open source and available for people to find and use? Yeah, I'll take that one too. I've shared a link in the docs, sorry, in the written chat. If there's anything that's missing from there, I would love to personally hear about that. I'm in the Calico Users Slack and now's a great time to get involved. So if what I've shared is there and it's meeting your needs, then fantastic. If it's not, then get in touch with me and I'll make sure that we can improve things. I can also try to open source most of the configuration we use for tests, for example for Envoy, for memifs. There is a test directory on the Calico VPP repo that we use, where we typically version the YAML that we use, so that things are predictable and easily shareable. So we had another question here from Ray: do vanilla Linux applications really work, and do they benefit from this configuration? I can take that one. Typically, if you run without the specific memif or VCL integration, normal applications should behave just like in a regular Kubernetes cluster, and they should benefit from the speedup that we provide. Obviously not all workloads will see the same kind of benefit, but for example if you're doing just a regular iperf and, say, encrypting it, you can still see quite some improvement compared to the encryption done in Linux, for example. So regular applications should keep working the right way. And regarding speed, obviously if you're limited by your NIC, you will still be limited by your NIC, but in configurations where the Linux data plane is the bottleneck, you could see improvements. Again, I'm kind of thinking along those lines: are there particular workloads or use cases where the memory interface, versus say the host stack, or wherever really this integration shines, and conversely are there applications or things you can think of where VPP is just probably not the best and you should really use a different data plane with Calico? So we try to target a few use cases. The first one we thought about was encryption: if you need to encrypt all the traffic between different nodes, or do really high speed encryption, VPP really helps, because typical kernel performance will be quite low, so the gap shows itself a bit more easily. Another use case that shines is if you need to send a lot of packets per second, so typically small packets that you need to handle, for example doing proxies or maybe DNS responders or something like that. That's also a place where a typical net dev interface would be limited to around a million packets per second, so if you need to go above that, you would really see the performance benefits from the memif. And the last one, VCL, has a bit the same characteristics: one of the places where it shines is also doing encryption, if you do TLS, or TCP, it speeds things up a bit as well. But I'd say really the main two use cases that we are targeting are encryption and small packets, in terms of pure raw performance. Okay, great, thanks. In the talk you mentioned these multiple networks in Kubernetes, can you tell us a bit more, what do you mean by that? So that's something we're exploring. One of the good points, as Chris mentioned, is that we are still not battle tested, so we can still play a bit with the data plane and add new features and try new things. And so we have been exploring adding a couple of features.
We played with Maglev for example, and we got some more contributions, for example about supporting additional optional transports, and one of the things we were exploring was: is it possible to expose several interfaces in a single pod and expose some kind of Kubernetes abstraction on top of them? For example, if you run with Multus you get multiple interfaces in a pod, but the extra ones won't have any particular magic done to them. And the question we were asking ourselves is, would it be possible to somehow extend the Kubernetes abstraction to a single... oops, huh?
Kubernetes is great, containers are lightweight & disposable, networking is simple yet powerful. But when it comes to network oriented applications, oh that can be slow! That's how the Calico/VPP integration first came up, as a way to address performance bottlenecks, making VPP's performance the motor of Calico's functionalities in Kubernetes. It speeds up container networking, but also allows us to expose even faster functionalities directly to the applications. So with this in place, how can we go even faster, while still preserving the Kubernetes abstractions ? We'll present how applications can leverage userspace interfaces, what this allows regarding network performance & additional functionalities and how the Calico/VPP integration makes this happen under the hood.
10.5446/56962 (DOI)
Hello, my name is Luca Deri, and today I'm going to talk about network traffic classification for cybersecurity and monitoring. Before I start, I want to tell you a little bit more about me. I am the founder of ntop, a company that develops open-source tools for network security and visibility. You probably know ntopng and probably also nDPI, because I presented it last year at FOSDEM. I am the author of various other open-source tools and a contributor to other tools such as Wireshark. Finally, I teach at the University of Pisa as a lecturer. Last year at FOSDEM I presented nDPI. This year I want to extend my presentation, adding new features that are present in nDPI but are probably not known to everyone. In particular, I want to talk about network traffic analysis using nDPI. This is because on the market there are many tools and toolkits, such as DPDK, PF_RING, netmap or, if you want, also eBPF, that allow you to capture events or packets for the purpose of network traffic analysis. Unfortunately, most applications still just sit on top of them and do a few fixed activities, whereas today the nature of the traffic is much more complex and we need to do something more than that. In order to avoid reinventing the wheel many times, we have decided to put into nDPI additional features that are not purely deep packet inspection oriented and that allow applications sitting on top of it to analyze traffic. Please note that you can use nDPI on top of DPDK; you don't need to adopt the whole ntop open-source stack, such as PF_RING, to use it. And also remember that this library is designed for speed. This means that we have tried to optimize the library as much as possible, and to overcome the limitations of typical solutions based on Python and R, which can do similar things but only in post processing, because they are not fast enough or they require too many resources. Just to recap what nDPI is: it is a toolkit that was primarily designed for learning about network traffic protocols and reporting what the application protocol behind a certain network communication is. Today we are going to talk about network traffic analysis, and I'm going to present some examples of network traffic problems that can be solved with nDPI. The first problem is string searching. In traffic, sometimes we have to search for specific strings, not just because we want to search for a certain word in the payload, but also because we need to match the traffic against certain criteria. A typical example is substring matching, which is implemented in nDPI with the Aho-Corasick algorithm. Substring matching is necessary whenever you want, for instance, to match a certain domain name against a dictionary. So imagine that you have a blacklist: a list of blacklisted hosts, a list of domain names that are not nice to contact, a list of spammers, and many things like that. So we're talking about strings, and you want to do this matching. The matching has to be a substring match, because when you have a domain name you may not want to match the whole host name but only a subset of it. Aho-Corasick is a string searching algorithm that is pretty efficient. Unfortunately, Aho-Corasick is a little bit complicated to implement, because it requires the implementation of an automaton.
In essence, it is a state machine, a trie if you want, where we represent inside it all the possible nodes for the possible words of the dictionary, so that whenever we have a string to match, the algorithm searches inside this automaton trying to find the best match, if any. As you can see in the picture taken from Wikipedia, you have two types of nodes, the blue and the gray ones. The blue nodes are terminal ones, so those that basically contain the match, and the gray ones are those that are used to build the tree. I don't want to go into the algorithm because we don't have much time, but I just want to describe how we can use it. In essence, the first thing to do is to initialize an automaton, then you have to add all the possible words to it, in this case a simple "hello" and "world", and then you have to finalize the automaton. In fact, the main problem of Aho-Corasick is that whenever you want to add a word you have to rebuild the automaton, and the same happens if you want to remove one. So make sure that you have all the possible words, otherwise you need to start over with another automaton and do a hot swap in case your application is processing traffic live. Then at the end, you see ndpi_match_string, which allows you to check whether inside this sentence there is at least one word matching the dictionary. And of course nDPI returns such a string. We have optimized the algorithm for networking, so we can find strings that end with a certain suffix, or strings that begin with a certain prefix. In essence, everything you can expect for matching a domain name is present in this library, even though you can also use it for matching plain strings. Just to give you an idea, the memory used by the algorithm to create the dictionary increases with the number of words. As you can see, when you have about half a million words, which is half of the Alexa top one million hosts, the size is about 900 megabytes. The build time also increases with the number of words. We have run this test on a very slow dual core machine, just to give an idea of the speed: with half a million words it takes about 7 seconds on a dual core at 3.2 gigahertz. But the nice thing is that, when you have to search, the search time stays more or less the same regardless of the number of strings in the dictionary. It's just the memory, and building the automaton, that cost a little bit more. The second problem I want to show you that you can solve with nDPI is IP matching. In this case we need to find an IP address in a tree, which is typical whenever you need to match several network prefixes against IP addresses. That again is typical if you have a blacklist, a list of spammers, a list of network ranges that are not nice to contact, just to give you some examples. A radix tree is the base for this algorithm. In essence, it's a tree where each node holds a single letter. So in this case it's cats, c-a-t-s, cat, c-a-t, and so on. Whenever a node is a terminal node, so it's a match, it is drawn with this yellow color, whereas if the node is intermediate, so it's used to build the tree, it's blue. This is the trie. Now the trie is important because we need to match a certain prefix, and matching a prefix is very important because a network, in essence, is a prefix that an IP address belongs to. This is why we are interested in it, and the performance is good.
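Before moving on to the tree matching, here is a minimal sketch of the string-automaton flow just described. The talk explicitly names the matching step (ndpi_match_string); the other helper names (ndpi_init_automa, ndpi_add_string_to_automa, ndpi_finalize_automa, ndpi_free_automa) follow the nDPI API as exposed in ndpi_api.h, but exact prototypes may differ between nDPI versions, so verify them against your headers.

```c
/* Minimal sketch of the Aho-Corasick automaton usage described above.
 * Helper names follow ndpi_api.h in recent nDPI releases; verify the
 * exact prototypes against your version. */
#include <stdio.h>
#include "ndpi_api.h"

int main(void) {
  char hello[] = "hello", world[] = "world";
  char sentence[] = "mail.helloexample.com";
  void *automa = ndpi_init_automa();

  /* Build the dictionary: every word must be added before finalizing,
   * because adding or removing a word later means rebuilding the automaton. */
  ndpi_add_string_to_automa(automa, hello);
  ndpi_add_string_to_automa(automa, world);
  ndpi_finalize_automa(automa);

  /* Substring match: check whether the sentence contains a dictionary word. */
  if (ndpi_match_string(automa, sentence) > 0)
    printf("matched a blacklisted token\n");

  ndpi_free_automa(automa);
  return 0;
}
```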
For the radix tree, lookups cost O(w), where w is the length of the string to be inserted. But here we are talking about IP addresses, so how can we turn a radix tree into something meaningful for us? Simple: we start optimizing. We collapse the nodes containing letters that can be collapsed together; in this case, c-a can be collapsed into a single node. So we move from a trie to a radix tree, where the radix is the part the nodes have in common. And as you can see, this is a data structure that is naturally ordered, therefore if you navigate it you will get the results in a specific order. Now, in 1968, Morrison created a special version of the radix tree called PATRICIA. In a Patricia tree, basically, we have nodes that, instead of being letters, are numbers, and it's pretty efficient for subnet matching in both IPv4 and IPv6. You can also do partial searches, so you can match a network range, a slash 24 or a slash 32 in the case of IPv4. It supports, as I said, both IPv4 and IPv6. In nDPI, what you have to do is the following. First of all, you create the Patricia tree, specifying the number of bits you are going to use, 32 for IPv4 or 128 for IPv6. Then you start adding nodes, in this case with ndpi_patricia_lookup, and then you can do ndpi_patricia_search_best, because we always try to find the best match, which is usually what you want to do with networks. Along with the fact that you have or don't have a match, you can bind some metadata. For instance, you can bind some information about the network itself: whether it's a good network, a blacklisted network, a network of spammers. Anything you have in mind, you can attach to it. In terms of performance, on the same machine I showed you before, you can have a Patricia tree built in less than a second. It consumes about 17 megabytes with 76,000 prefixes, which is quite a lot, and as you can see the search is again under one microsecond, so it's pretty fast. Again, the important thing here is the speed, because we want to use nDPI live. Another typical problem we have to address is probabilistic counting. This means that whenever we need to know how many of something there are, we need to allocate a data structure, usually a hash table. For instance, if you want to know how many hosts my host contacts, you have to keep a list or a hash table of these values. Or if you want to know how many different countries a certain host has contacted, a typical question that allows you to answer: am I doing something local, or remote? For instance, with Skype you're going to contact half of the world, while with other typical protocols such as HTTP or TLS you're usually going to stay in a certain geographical area. In order to answer these questions, the simplest thing to do is to use a generic data structure. But unfortunately, those data structures take a lot of memory, in particular if you have a lot of data. So if you are unlucky, if you have a host scanner or a network scanner, you will end up using a lot of memory. That's why we use probabilistic data structures: structures that are not perfect, they introduce a little bit of error, but in return you use much less memory and you have a lot of speed. The one I'm going to present is called HyperLogLog; it was created by Flajolet some years ago, and it's a probabilistic data structure that gives you an idea of the cardinality of a set.
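Before getting to the counting part, here is a sketch of the Patricia flow just described. The ndpi_patricia_* helpers (ndpi_patricia_new, ndpi_patricia_lookup, ndpi_patricia_search_best, plus ndpi_fill_prefix_v4 and the node user-data accessors) are exported by recent nDPI releases, but the prototypes have moved around between versions, so treat the exact signatures below as an assumption and check ndpi_api.h; the 192.0.2.0/24 prefix and the "blacklisted" tag are just illustration values.

```c
/* Sketch of the Patricia flow described above: insert a prefix, attach a
 * small piece of metadata, then look up the best match for an address.
 * Names follow the ndpi_patricia_* API of recent nDPI releases; verify
 * prototypes against your ndpi_api.h. */
#include <stdio.h>
#include <arpa/inet.h>
#include "ndpi_api.h"

int main(void) {
  ndpi_patricia_tree_t *tree = ndpi_patricia_new(32 /* bits: IPv4 */);
  ndpi_prefix_t prefix;
  struct in_addr addr;
  ndpi_patricia_node_t *node;

  /* Insert 192.0.2.0/24 and tag it, e.g. "1" meaning "blacklisted". */
  inet_pton(AF_INET, "192.0.2.0", &addr);
  ndpi_fill_prefix_v4(&prefix, &addr, 24, 32);
  node = ndpi_patricia_lookup(tree, &prefix);   /* lookup inserts if absent */
  if (node) ndpi_patricia_set_node_u64(node, 1);

  /* Best (longest-prefix) match for a single address, i.e. a /32. */
  inet_pton(AF_INET, "192.0.2.42", &addr);
  ndpi_fill_prefix_v4(&prefix, &addr, 32, 32);
  node = ndpi_patricia_search_best(tree, &prefix);
  if (node)
    printf("matched, tag=%llu\n",
           (unsigned long long)ndpi_patricia_get_node_u64(node));

  ndpi_patricia_destroy(tree, NULL);
  return 0;
}
```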
Again, with HyperLogLog you can track, for instance, for a host, the number of other hosts it has contacted, the number of countries, things like that. And the memory that you are going to use depends on the error you are willing to accept. I'll show you an example. Suppose I want to create two data structures, one for counting the number of different hosts that have been contacted, and one for counting the number of different countries that have been contacted. When I initialize the data structure, I have to specify a value i, and this table shows, for a given i, the memory that is used and the error you should expect on the cardinality. In my case, I use 8. It means that with 256 bytes I'm able to have an error of about 6.5%, which is pretty good: if I want to find scanners, I don't really need to be super precise, I just need to know roughly the number of hosts or domains or whatever have been contacted. So with 256 bytes I can do exactly that, and I call the nDPI HyperLogLog count function to get the result. Another typical problem is anomaly detection, which is basically whenever we want to understand if there is something that deviates from our expectations. In this picture you can see that some people are marked with a red color, unlike the others; this is our goal. We want to do it for two main reasons: first of all, because we want to clean data, so if we find outliers, data that is a little bit unusual, it might be a measurement error; and otherwise because we found a problem. The reasons are manifold. Now I'm going to explain how we can do that with nDPI. We usually do this with time series. A time series is an ordered set of data points; I think you're pretty familiar with those if you use Grafana, if you play with networking data, InfluxDB, this type of thing. And once you have a time series, in essence, you have the data. But here we have to introduce two new words. One is called the observation, which is the value that we really read from the network. The other one is the forecast, which is the value we expect to see at the next iteration. For instance, if I want to predict the value of the traffic of my network in one minute, that is the forecast; when the minute has passed, I can do the observation, that is read the real value. The squared discrepancy between forecast and observation is called the SSE, the sum of squared errors, and it is used to understand how far the prediction is from reality. And how do we use it? Look at this picture: the series is the blue one, and the green one is the prediction, a way for us to mimic the real series slightly into the future, because we predict the value and, as soon as we have the real value, we compare it with the prediction. In our algorithm, we have the ability to create two bands, one low and one high, and we say: if the series falls between the low and high band, then we are good; if it falls outside, then we have an anomaly. Very simple. I don't want to make it too complicated, because there is a lot of mathematics behind it. There are three algorithms for doing that. The first algorithm, called single exponential smoothing, takes into account only the value, so it gives a weight to the value we read. In double exponential smoothing, we also give a weight to the trend.
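Stepping back to the HyperLogLog counters for a moment, here is roughly what the two-counter example looks like in code. The ndpi_hll_* names follow nDPI's analysis helpers, but exact prototypes may vary between releases, so check ndpi_api.h; the sample peer and country values are obviously made up. Note also where the 6.5% figure comes from: with i = 8 you get 2^8 = 256 one-byte registers, and the standard HyperLogLog error is roughly 1.04 / sqrt(256), which is about 6.5%.

```c
/* Sketch of the two HyperLogLog counters described above (hosts and
 * countries contacted by a host).  ndpi_hll_* names follow nDPI's
 * analysis API; verify prototypes against your ndpi_api.h. */
#include <stdio.h>
#include <string.h>
#include "ndpi_api.h"

int main(void) {
  struct ndpi_hll contacted_hosts, contacted_countries;

  ndpi_hll_init(&contacted_hosts, 8);      /* 256 registers, ~6.5% error */
  ndpi_hll_init(&contacted_countries, 8);

  /* For every flow, feed the peer address and its country into the
   * counters: duplicates are absorbed, only the estimate is kept. */
  const char *peer = "93.184.216.34", *country = "US";
  ndpi_hll_add(&contacted_hosts, peer, strlen(peer));
  ndpi_hll_add(&contacted_countries, country, strlen(country));

  printf("~%.0f hosts, ~%.0f countries\n",
         ndpi_hll_count(&contacted_hosts),
         ndpi_hll_count(&contacted_countries));

  ndpi_hll_destroy(&contacted_hosts);
  ndpi_hll_destroy(&contacted_countries);
  return 0;
}
```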
So, for instance, the trend weight lets you give an extra bonus to the fact that the value is increasing or decreasing. In the third case, if you have a signal that repeats over time, what is called seasonality, then we can imagine predicting the future with a correction factor based on the seasonality. For instance, if the traffic of a host is low during the night and high during the day, you can speculate that future traffic will follow the same pattern. So these are the three algorithms; the last one is called Holt-Winters. In essence, we have three smoothing factors, called alpha, beta and gamma, that give a weight to the value, the trend and the seasonality. Now, in nDPI we have implemented all three algorithms, and you can decide, based on the nature of your data, whether you want to use the first two or the third one. The first two don't take seasonality into account, so if you have seasonality you're basically obliged to use the last one. As you can see, we need the values of alpha and beta for the algorithm. In order to get them, either you use average values, or something that you believe makes sense, or otherwise you do something called fitting: in essence, we provide functions that allow you to compute those values based on the past. I will show you how it works. You allocate, in this case, a double exponential smoothing data structure, and into this structure we continuously add the values that we read from the network. Every time we add a value, we receive back from this nDPI add-value call a prediction and a confidence band: we give back to you, as a user, the expected value and the upper and lower boundaries that we expect to see. If your value is within the boundaries, then we are good; if it falls outside, we have an anomaly. If you have some measurements from the past, you can feed this algorithm with whatever values you want, and then at the end you can call the nDPI fitting function, and we return to you the best alpha and beta values for the past. This means that if your signal stays similar to what we have seen before, these are the best two values you can use for predicting the future. In essence, with nDPI you can get something like this. At the beginning you see "learning", because the algorithm is still trying to learn how this signal works, and then at some point we start operating. At that point you have an anomaly, because the value that we read, 173, is far outside the confidence band; actually, it is lower. In this case, the second reading is too high. The last thing I want to talk about today is binning. Data binning is a technique that allows you to split values into something called bins, in essence a vector of numbers. A typical example is packet length: we are used to splitting packet lengths into bins of up to 64 bytes, from 64 to 128, from 128 to 256 and so on. This way we don't have to keep all the individual values, but we can keep ranges. This is the goal of the bin. And we can use it for comparing things. For instance, if we want to compare two host time series, I can consider the points of the time series as the values of a bin; or, if I want to see whether two connections start with the same packet lengths, I can use the packet length as a bin, like the example before. I want to show you an example. This is code that is present inside nDPI as an example. Suppose that we have two or more time series.
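Since the exact ndpi_des_* prototypes change between nDPI releases, here is instead a self-contained plain C illustration of the double exponential smoothing idea just described: keep a level and a trend, forecast the next value, derive a confidence band from the running error, and flag observations that fall outside it. This is not the nDPI implementation, just the concept; the smoothing factors and the toy series (with 173 as the outlier, echoing the talk's example) are arbitrary.

```c
/* Self-contained sketch of double exponential smoothing with a confidence
 * band, illustrating the idea described above.  This is NOT the nDPI
 * implementation; in nDPI you would use its DES/Holt-Winters helpers. */
#include <stdio.h>
#include <math.h>

struct des {
  double alpha, beta;    /* smoothing factors for level and trend */
  double level, trend;   /* current state                         */
  double sse;            /* running sum of squared errors         */
  unsigned long n;       /* observations seen so far              */
};

/* Feed one observation; returns 1 when it falls outside the band. */
static int des_add(struct des *d, double value, double *forecast, double *band) {
  int anomaly = 0;

  if (d->n == 0) {                       /* first sample: seed the level */
    d->level = value; d->trend = 0;
    *forecast = value; *band = 0;
    d->n++;
    return 0;
  }

  *forecast = d->level + d->trend;       /* what we expected to read     */
  *band = 2.0 * sqrt(d->sse / d->n);     /* ~2 sigma confidence band     */
  double err = value - *forecast;
  if (d->n >= 3)                         /* give it a few samples first  */
    anomaly = fabs(err) > *band;
  d->sse += err * err;

  double prev_level = d->level;          /* update level, then trend     */
  d->level = d->alpha * value + (1 - d->alpha) * (d->level + d->trend);
  d->trend = d->beta * (d->level - prev_level) + (1 - d->beta) * d->trend;
  d->n++;
  return anomaly;
}

int main(void) {
  struct des d = { .alpha = 0.9, .beta = 0.5 };
  double series[] = { 100, 103, 106, 109, 173 };   /* 173 is the outlier */

  for (unsigned i = 0; i < sizeof(series) / sizeof(series[0]); i++) {
    double f, b;
    if (des_add(&d, series[i], &f, &b))
      printf("anomaly: read %.0f, expected %.0f +/- %.0f\n", series[i], f, b);
  }
  return 0;
}
```

Running this flags 173 as an anomaly because it falls well outside the band built from the previous, much smaller, forecast errors, which is exactly the "learning, then operating, then anomaly" behaviour described above.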
So suppose that we have the time series of many hosts of our network. I want to see when two hosts are similar, that is, when two hosts behave the same way from the network standpoint. Thanks to this example, and to RRD, where RRD is an archive format for time series files, you take some RRDs of hosts generated by ntopng, or by other tools that poll SNMP, it doesn't really matter, and you give them to this tool. In essence, this tool compares those bins and finds the bins that are similar; here you see the nDPI bin similarity call. And this allows you to find hosts that behave more or less the same way, and others that are very different. For instance, we have applied this technique inside ntopng to find, from SNMP, network ports that produce similar values, so that in case of an attack, for instance, they behave the same way, or ports that are supposed to behave in a certain fashion, and you can check whether they are similar or not, just to give you an idea. You have many, many ways of using this. And don't forget that this algorithm is super fast, because with over 10,000 hosts we are able to read the values, do the matching, and all this in less than a second. So nDPI, as I said, is designed for speed. We have many more features, but I don't have the time to describe all of them. For instance, we have streaming data analysis, we have clustering, also called unsupervised machine learning, we have other functions for high-speed JSON serialization, jitter, entropy, and so on. But I think you need to read the source code, because we are running out of time. The last thing is the following. We have been awarded by Google because of our work in this field, and we would like to use this money to invest in the community and in development. So if there are people interested in working in this field, and being paid for developing open source software, contact us. Thank you very much for being here today, and I encourage you to download nDPI and to play with it. If you want, you can contact me at any time. Thank you very much. Showtime. Yeah. Thank you, Luca, for your talk. It was a very interesting topic. So I got a few questions. How about contributing to nDPI, and the Google prize and so on? Yes. We would like to encourage people to contribute to it, because we see that there is a need for help in various areas, and we are looking at a lot of things, in particular in cybersecurity, not only dissecting protocols, so not just understanding what the application protocol behind the traffic is, but understanding the behavior of hosts. We have recently been awarded by Google with this prize, and the idea is to use this money to pay people, to pay students, to pay contributors. It doesn't have to be a job. It has to be a contribution to open source; there has to be value in what we are doing, and we are open to people helping us with new algorithms, new protocols, new implementations, or new ideas. Please feel free to contact us; we would like to be in touch with you, to understand what your ideas are, and of course to enroll you in this project. Yeah, for sure. The next question is, what other open source projects is nDPI being used by? Well, I know that, for instance, it is embedded in OpenWrt as a package, and there are many people that are using it inside small devices for blocking traffic using iptables. This is a typical example.
I see that there are people using it to classify traffic and therefore to generate, let's say, data for machine learning algorithms. So it is used mostly to generate input for other applications, and what we said today is that we would like to also promote the fact that the library is offering an API for traffic analysis, so that if you need to create applications, you just take DPDK, PF_RING, or anything for packet capture, and then put nDPI on top for traffic analysis and focus on what you have to do. So this is another opportunity for people. The open source part, I think, should not be limited to the application: everyone, not only companies, should be able to do traffic analysis. You are also asking what line rate is supported by nDPI, or whether it is more likely to be limited by the capture method. Well, nDPI is a library that allows you to dissect the traffic and to generate the metadata. It is written in C. So, in our idea, it can help you simplify the design of your application, because in essence you delegate the analysis to an existing component. I don't know if this helps with answering the question. Yeah. Is ntopng based on nDPI, or what is the relationship between these two open source projects? Yeah, let's say that inside ntopng we use nDPI as a layer for everything, because, like I said, we have delegated to it many of the features that you would otherwise have to implement yourself. So ntopng is based on nDPI, but there are also other tools. Yeah, Pim just made a good suggestion in the chat, saying that DANOS, which is also based on DPDK, is using it. So let's say it implements something so that every monitoring application, or let's say traffic processing application, doesn't have to just route traffic or be simply limited to layer 2 to do that. Another example I forgot is Open vSwitch, because somebody ported nDPI into it as well, so that it's possible, for instance, to control the traffic between peers that are talking to each other and then filter selected protocols. So for instance, you can say: allow SSH to pass, or block Netflix, these types of communications. Yeah, you already mentioned that nDPI is written in C. What language bindings are available for nDPI? Okay, there is a binding for Python, so you can use Python, and the Python binding is using nDPI and extending it also in the context of machine learning, for understanding the traffic. So Python is definitely one of them. And there is also somebody who ported it to Go, so you can also use it from Go. And being a pure C library, I don't think it's difficult to bring it to other languages such as Rust, for instance; we didn't do it, but I believe it should be pretty simple, because we try to be self-contained. Namely, packet capture, for instance, is not part of nDPI, simply because we want you to be able to use this library on top of, for instance, DPDK or netmap or anything. It doesn't have to be bound to, for instance, libpcap. So this means that the library is pretty portable and doesn't bring any dependencies with it besides the basic libc things. So it should be pretty easy to move it to other languages. Yeah, okay. Just a last question from my side. nDPI, we shouldn't, yeah, okay, we shouldn't look at it just as protocol detection, so we can do much more, I guess. Yes, yes, that is the idea, that's the idea of the talk today. We can use it also for processing traffic.
So, namely, even if you don't need the deep packet inspection facility at all, you can use nDPI, for instance, for determining who is the top talker, or which hosts are contacting many other hosts, typical questions that you have in cybersecurity in general when you have to analyze network traffic. Yes. And the last one: the challenges that come from greater encryption, since we see more encryption now than ever, I guess. Yes, encryption is creating trouble for DPI because we cannot inspect the payload anymore, but it is also offering opportunities, simply because when you talk with a plain text protocol there is no real fingerprint. I mean, the fingerprint is probably inside a protocol such as HTTP when you have to.
Security and monitoring applications need to classify traffic in order to identify application protocols, misuses, similarities and communication patterns not easily identifiable by hand. nDPI is a library that implements various algorithms for traffic analysis, able to detect outliers, anomalies, traffic clusters and behavioural changes efficiently in streaming (i.e. while traffic is flowing). The goal of this presentation is to show how nDPI can be used in real life to inspect network traffic and spot patterns worth analysing in detail. Modern network security and monitoring applications need to analyse traffic efficiently in streaming fashion (i.e. while traffic is flowing). This is in order to detect interesting traffic patterns in real time without dumping data into a database and performing computationally expensive queries in batches. Many network developers do not have the skills to efficiently analyse traffic, and data scientists often do not have the skills to understand the complex nature of network traffic. For this reason nDPI, a popular open-source deep packet inspection library, has been enhanced with various algorithms and techniques that dramatically simplify traffic analysis and that should ease the creation of applications able to efficiently spot traffic patterns and anomalies. This talk will introduce some of these algorithms present in nDPI and show how they can be used in real life at high speed, contrary to many applications that are inefficient and often based on languages (e.g. Python and R) that are not designed to analyse traffic in streaming at 10 Gbit+ on commodity hardware.
10.5446/56967 (DOI)
Hi there, I'm Scott. Hello, I'm Vin. And what are we doing today, Vin? What are we doing today, Scott? We're going to make the game. What are we using to make the game? Control. And what program? Enu. And what programming language is Enu written in? Nim. And what is? Mostly control. And what is the name of the game that we're making today? Potato Zombies. And what is a Potato Zombie? A Potatoes Feet. Alright, so what are you going to make now? Is that the potato you're going to make? Why are you jumping so much? Why don't you fly? No. I'm... You're making me seasick. You're not even in a sea. Okay, it's making me chair sick. Enu sick. Enu sick. It's like a root beer. It's ginger ale. Cockle. Hello there. So are you going to do anything or just jump around? Anyway. Put that into the fun. Should we make it walk? Alright, go into code mode. Click on it. Oh shoot, I didn't think it was. This is a problem. Alright, so go into code mode and click on the robot. Okay, now try it. Dale. Okay, click on her. Alright, so now... Click on whatever it is. You also teach them how to write some things. Or we can do a loop. Make it fast. Make it fast. Make it fast. Make it fast? Yes. 100 speed. 100 speed? Yeah. Alright, so what do you want to do now? Alright, make yourself a... Underground. Yes. And then hold it into potato tail. A newborn robot. Connect the potato to it. Okay, so you need to make a potato. So you are on potato duty. Alright, potato duty. Dale. That's the potato. Alright. So name potato. Potato baby. Why do you talk about potatoes? This destroys everything I know about email. Alright, so now what? Make... No, you go. Make a house. And I really thought it was this like my dad. You are a pretty good artist. For you know art. The most of the creation you know. This. I was thinking pretty good. It's a weird floor though. I wish my house didn't have a whole bunch of holes in the floor. What? No, it's a staircase. Oh really? Okay. Yeah. Oh, it is a staircase. There's the deck. I like it. And you know, no roof. Who needs a roof? It doesn't rain in Enu anyway. Are you trying to build onto the potato? What are you doing? The potato. Pretty nice house. That's pretty nice. Definitely better than this one. Yeah, both nice. I like the deck on that one. There's also a rail. Sure. I'm using red blocks for all of them because red is my favorite tool. Hi robots. The potato is only a tiny creep in the out. The lighting is on the stuff right now. I'm not sure what happened. I didn't think I changed anything but definitely everything is really super saturated. There, the rail. That's a really nice wall actually. I just want to add a flag to the city. And I'm going to be... Green at this side, red at this side, and blue at this side, with red in the middle. That's a nice flag. Yes. Hello. Right there, right here, this is a potato. Right there. That makes sense. Potato zombies would probably have a potato on their flag. That seems appropriate. This is a nice city so far. I'm going to add an ancient ruin. Alright. What are we actually going to do in the game? You're building stuff but what's the point of having one? You're fighting to save potato zombies from a dragon so you have to put it on the town hall. And on top of it you have to fight the dragon. Alright. So are you going to build the dragon too? Yes, but first... And so the dragon is going to chase the potato zombies? No, no, no. This build... I don't have fire to build from the dragon. You're going to have fire come out of the dragon?
No, I don't have fire to build to save the dragon. Oh, I see. But how does the dragon get the potato zombies? And then how do you save them from... You have to jump on the dragon three times by moving to the town hall with a sealant. Okay. The freight that ancient ruined... Now let's also add... So are you going to need to jump off things or will you just jump super high because that looks pretty good there, the brown. I guess it depends on what angle you're at. It looks like brown from this angle. Yeah. So the dragon is going to chase after the zombies? No, no. It will just be the reason why there are fires in the city. It won't actually save the children in the game. Okay. So is there any way the potato zombies can actually get hurt then? No, no, there will just be higher in houses. Okay. I've never heard of that thing. You've never heard of Dragon Quest? Yes, this will just be a thing on the ground. So there's no real way to win or lose them, right? Just when you jump on... So the dragon is going to just be flying around? Yeah, it's on top of the town hall. On top of the town hall? Yeah, I haven't made yet. Alright. And then... Look how weird it looks at this angle. It does. That's neat. Yeah. So you jump on the dragon. And how does it move? Is it just moving randomly? Is it chasing? It's just moving randomly. Alright. And does it have any way to hurt you? No. Flames! And you don't want to throw up the fire to this area. You have to climb the fire? Yeah, because somehow this fire doesn't hurt you. Yeah, that's true. And it's hard to make fire in the evening that hurts you. Yeah, and you need to go through this parkour course. And so the potato zombies are just in the houses? Are they walking around the city? They're walking around with the dragon, they didn't see them. So that's why he's not following them. I should make this a bit more easier. I think you might need to make the things a little better. Why don't you do like a second? It's gonna be really hard. This seems good. Wait. Can I try to do your parkour course once you're done building it? Yeah, yeah. So... So you don't want to make any of the towers turn around or anything like that? You should make... why don't we have like a thing that goes back and forth? Why don't we have this one that goes back and forth? You want this one to go back and forth? No, this one. Alright. Oops. So it's gonna go back and forth. So it will be kind of tricky. This parkour course is supposed to be hard. Is that what you want? Yeah, yeah. And now... I'm gonna add one of the whole one right here. And it's gonna be connected with the bridge. There it will. And I'm gonna add a little thing here so it will make sense. Because it will be weird there. Make more sense now. Beautiful. Green light flag. I don't know why we go through the US so much to relive in Canada. You haven't gone to the US in like four years. Yeah, but we... And also ask for the Reholder's Parkour Diary. An alligator? Yes. Because the land of our Gators. Somehow. It's actually specifically called Gatorland so it is literally the land of alligators. Dale. It was an old PC game called Corncob 3D and he flew a Corsair and you fought aliens. And you could jump like really, really high and I was like, it's really annoying. Why don't I stop you from the maze? Oh, it's a different day. Oh, it's now a different day finally. Yes, finally. And you're still six, right? I'm 45. Beautiful. It does look pretty good. It's a cool world. The city has stood. There should be more houses for us. Fine. 
First let's make the city a bit bigger. Alright, you can add a couple houses and then we're going to do the potatoes for the potatoes. Is it recording? Yeah. Do you know what this time looks like? A mushroom. It's also a time to look like a toad head. Right, Dad? Yeah, because a toad head looks like a mushroom. Except for the mushrooms don't have faces. Yeah, and they need a fire to cook food. What do they eat? Potatoes. No, that seems wrong. The potatoes are really nice. I guess that's... Potatoes, zombies, they'll eat potatoes. I suppose that makes sense. I like that the flats are here. Right, Dad? The flats zone. Pretty nice house. Also the fourth one is a bed. Alright, we're going to make some potato zombies. Can you go make a potato? Wait. And the salt ray is an elevator. So how do you want the elevator to work? Is it just always going to go up and down? Or do you want it to go up and down? There's a little up, left, south, run on, and down. And the cylinders up, move, go down, down. Okay. And, um... Of course, my window for the flat is also an elevator. That actually seems like dangerous. Right, Dad? I guess so, yeah. Alright. This will be the first code thing. Alright, so code it up. If I saw it go up, but run on the sun's door, off it, it would go down. Having the flag on top of the elevator seems dangerous. Yeah, let's just see. Can I try it first? I just... I didn't go high enough. Where are we? What the heck? What the... Oh, no. Uh-oh. Add some walls. This will take some information on this elevator. This is a ride elevator. But why did you even decide to make your own game? Why did I decide to make Yinu? Yeah. Uh, partly because in the coding club I couldn't... I didn't like any of the coding tools. And also, I don't know, just like a fun project. Whoa. What? Hey. So is that the elevator operator? Yeah. No, this is the... This is using the elevator. I am not totally sure if this potato is going to go up with the elevator. I think it probably won't. The elevator is going to go up and the potatoes on it. We're just going to stay right where it is. And we'll go back down six. And we'll go from... Up to down. Up, up to down. Alright, let's see what happens. So, I'm going to go from here. Whoa. What? Oh, I guess that's right. Alright, what do you make of now? We almost at the town hall. The green represents freedom, blue represents water, and the potato represents a potato zombie. So where do you want it to go? And then we'll make it go... I want it to go up that block. Alright, and now what do you want it to do? You want it to go up and down? Yeah. Just forever? Yes, up and down forever. Of course we will have to think this and take their speed. It was an attempt at past this. How much do you want it to go up each time? Like three or four or five? Three, four. Alright. Only five. Um... Is that what you want? Yes. So that'll make it so it's a bounce. So it's the bounce rate you want. At least there's more code in this time. Right, Dad? Yeah. What did I say? How high would it be on the bounce? Five, ten, three, one. Twenty. I think that's going to go pretty high. And now? Oh, God. Uh-oh. Are we ever going to come back? We're going to have to change the number of times we have to. Yeah, we have to. I don't know if you two are. Can we change? Change it to ten. Ten's still pretty far. I don't even know what it means, like what ten means. Ten is still going to be a lot. At least not as much. Not too bad. Should we change to something else? We're going to change it to five. 
This seems, that was better. We finally up to the dragon. We're going to make the dragon here? Wait. Forth though, Dad, we need to add mail. Oh, the mail? Mm-hmm. We're going to add the mail. Now, all right, make it so, this one in the mail, it tells you to add the funky hat. Duh-duh-duh-duh. Duh-duh-duh-duh. All right. I'm going to do the fancy. I guess that's right. Man, it is really glowy. I think it's getting brighter. Is it getting brighter? We are moving closer to the sun. I didn't think we'd get brighter. That's a pretty impressive hat. All right, time to do the dragon. All right, so go find a separate spot and build the dragon. This will look like a duck. Oh, you're building it way up there. We can just build it on the ground and then make it go up. All right, this is fine too, I guess. It should look like a duck. Make sure they don't join if you accidentally touch the thing and they'll stick together. There's no way to split them apart yet. It really does look like a duck. So make it go left and right. And as you jump on it, you jump on it if it flies for one second. Why don't you make it go left and right? Remember, the loop is the o-key. So if you do that, press Enter. Enter. Enter this one. Return that one. Now, what letter does the left start with? L. All right, that's an I though. So hit Backspace, Delete. And try the L again. And Space. And how far do we want it to go, left or right? Both four. Both four, okay, so four. And now New Line. And now Right. What letter does the right start with? R. And now Space. And four. And then you can just press the Escape key to close this. Man, that does so slow. Can you please make it go a bit faster? I want to delete the fours. Okay, what do you want them to be? I will do it. So there's the cursor right there. Five. All right, and you can use the arrow keys to move it. And then this is to delete. And five. And then if you press this button, it'll reload your code, but it won't make your code go away. So now it's doing five. And then we have a escape. Escape? Is that one in the corner? Uh-oh. So we're going to start by going this way, and then we'll go back and forth. So go right. Watch how do you go right. And then what do we have after that? We need a space after. So you need to put a space between the two of them. This one is Space. Well, Space is the big one, but you need to do it. Now hit Space. And now what do we want? All right, now you can press Escape. And this one. Yeah. There. Is that a weird duck? It looks like a duck. I guess. But I can just look a little bit like this. It's the wrong color to be a duck. I want the duck. So what do we want to happen? I want a duck. I want it to flap its ass, and it's the head sweet. And then it will disappear to the world reload. So we'll just call what it's doing right now, flying. And it's going to do what you said. So we'll go left, five, and right, five, and just... If we're on it, then we're going to go from fly to... Hit. This is going to be hard to hit. Can I slow it down? Or do we want it to be fast like this? Fast. Okay. Be to discipline to be hard. Okay. When will it disappear? Well, we didn't add the disappearing code yet. I can't make it disappear right now. Sorry. I can't be able to, but I can make it fall to the floor though. Until it reloads. Until it reloads, yeah. All right. Do you want to try it? It's not great. It died. This is a huge bow. That's a bow? Can I try something? Yes. Are you naming it bow? No. Yes. And dad? Mm-hmm. If... You might not be able to build it there. 
I'm going to add another one to the bug list. You're going to fix all these bugs, right, fam? No. These are the graves. That's the graves, son. And they'll have the original noon-tooth on them. Are we just calling it name? Grave? Yeah. Gravestone. Gravestone. Of noon-tooth. How do you spell noon-tooth? I think like that. So let's make some potato zombies. So let's go... So the grave is pretty important. We touch the potato zombie tartar. Yeah. We touch noon-tooth with the answer stores. So... Alright, so let's make... Let's just start with a... Let's say... This is not very convenient. So we've got a potato. Alright, and we're going to give it a name. Name, potato, zombie. Alright. And so what do potato zombies do? They wander around and for noon-tooth, there's gravestones everywhere. And do they... Like, if you get close to them, do they run away? Or do they come to you if you get close to them? They run away. We don't know their time for a gravestone there. Okay. So if they... So we'll have wander. And we'll have... Do we want to call it flea or runaway? Runaway. And... Oh. And we'll... Alright, so for wandering, we'll just be... We'll go forward one-to-five. And then we'll turn... So running away will be... So this will be run. We'll run when we're doing that. We'll walk when we're doing this. And... So we'll be with their ancestors during the stings. And Dad, do you want to hear how the batay started formed? Sure. There was a rain forest and while the rain dropped somehow, it turned into a tail and hit one of the new toots and it turned into a batay zombie. And then the tail turned into a... And then the tail turned into a batay zombie. So we start off here and then we switch to wander mode. But if the player is... If the player gets too near, then it switches to runaway mode. And by the player gets more than 25 meters away, it switches back to wander mode. And then when it's in wander mode, every time it goes through, so wander mode is just walks forward and then it turns. And every time it... There's a one in ten chance that it's going to drop one of the gravestone of Nun Took. Why'd you call that? Because that's what we wanted to call it. So now, can we do the wall really quickly? So... So we're going to go... So what do we... Make some variables. We'll make the length. How long do we want? Like 150. And how tall do we want it? 200. I don't know about that. 150. Let's start with 20 and then we can change it later. Okay. So we'll go... So height times. And then forward on time. So we're going to make a square. So we'll go forward length. Then we'll turn right. And then we'll go up. And that should be it. So it's starting to go all we want. And so we'll make it go faster. If we make the speed zero, it'll just happen instantly. So what do I... How tall do you want? I'm not going to make it. 150? We'll try it. I don't know. What's happened? Whoa! I mean, if they really want to protect themselves from the humans, that's a good job. All right. So now I think we should make some spawners for the potato zombies. Okay, so each one of these will make five potato zombies. I just wanted them to... They wander all over the place, but I wanted them to be spread out a little bit. So that'll make five potato zombies. So I mean there will be infinite potato zombies technically? What's that? Does that mean there will be infinite potato zombies technically? Well, each one of these only makes five. So right now there'll be like 16, the original, and five more. And do we want... 
You want it to make a few more spawners? Yeah. How many should this one make? One. No. Five. It's like every seven is my favorite number. Five, seven. Oops. All right. Whoa. Why don't they want to take the eggs in a house? Because the potato zombie must have gone into the house. It is really starting to get... Why does it possible to make it so they sometimes randomly enter a house and then leave after a few seconds? I mean they just walk around randomly, so sometimes they will go into a house. You could certainly code it to make them have more intelligence about that. Like you can tell... They can tell when they're near objects. So if you gave a house a name, then you could have code that said if you're near this house, then you know go to this position or whatever. Speed one. This is a weird thing. Oh! I wiped it. Yo! 45 seconds! All right. Thank you very much. That was E.N.U. on Scott. This is... Vincent. And you also count when we'll only be at the end of this video and maybe some other videos too. Thank you very much. Thank you. Sorry, I think we were muted. Can you hear us? Yes, we can now. Great. Well, thanks for watching our talk. It was a little all over the place, but did you have fun making it then? Yes. I think it's a little bit of a shame. It's actually adds to the charm. I'd have to say. Thanks. We, yeah, we had fun. He's made a few sequels already. Although mostly on... I also had some other stuff too. Wasn't in the video. For one day. For update. So. So we're going to release the source once 0.2 is out, which hopefully. There's going to be a release of some sort by the end of the month. I'm not sure if it's going to be in a fit like a release release, or if it's going to be a release candidate or a preview. Like I'm fairly confident I can have the code done by the end of the month, but then there's a lot of documentation work and things like that. That also needs to be done. So that might take a little bit longer. But once 0.2 is released, then we're going to release the code for potato zombies as well. On GitHub. Yeah, that would be great. And then you can probably not bother that much with the documentation at this point. Because you already have like a couple of great videos on Inu on YouTube, which are basically how I learned how to program with Inu and how we learn with our son, like with my son, and how to build those towers. We also have those examples, which are also great. Yeah, well, yeah, I fairly spoke with the other few examples. I'd like to have, if nothing else, some documentation around the new, so there's the new state machine stuff that lets you define behaviors and how things switch between behaviors and things. And so I'd like to get some documentation up around that at least. But I don't think, I'm hoping that the next release after this one, 0.3, is going to have some main game documentation. I think that's going to be a great inventory where you can put your creations. Well, a better inventory. A better inventory. So I'm hoping that the next release after this one, which probably won't be out until later in the year should, that's, that will be the release that I think will be usable for, for pretty much everybody right now. It's still mostly just for Finn and I to play with, but I definitely appreciate anybody else who takes the time to check it out. Okay, cool. I have a couple of technical questions. 
About you know, yeah, the first one was, yeah, I know there is a new, like the prototyping system, which lets you like build objects and name them and then instantiate them with new, where, however, how does, how does he know that this particular set of cubes are a single object. So are they just adjacent to each other? If they're like, you know, next to each other, they're the same object. It does have a sort of thing. So there's some intelligence. So that's the reason why video the, the ground sank into the earth, because there was a bug with how it detected adjacent objects. And it was counting deleted blocks as well as, as so there were some deleted blocks, but it saw that there used to be a block there and it joined them. So basically right now, and I need, I probably will need to tweak this, but if you're building something, if, if you build something and it touches something else, and the something else is the last, the previous thing you were working on, it will join those two objects together. So, and if you just, if you keep building on to something, then that's one object, but if you, if you, if two objects touch and, you know, one of them, need, if one of them doesn't have a script and it was the previous object, you were building on it, will join them. I don't know how well that's going to work in practice. I might, at some point, I might need to add just like a new object button or something like that. I'm trying to just try to be reasonably intuitive once the bugs are worked out. Okay, cool. And another question I had was, was about the controls. As far as I remember, there is no way to, to like use, you can't like play the game as yourself, right? You see, you have to create a character that has a script and they like run around and they do things for you, but it can actually like shoot and, you know, you can like use a W A A S D to run around and like left click to just shoot those zombies, right? Yeah, so again, this is just talking about future plans. So who knows what's actually going to happen, but the intention is that you'll be able to like do something that, you know, triggers a script, like, so you stand on a block or press a button or whatever. And then in that script, you can manipulate properties on the player character. So you could adjust the jump height or you could make it so you can not switch modes, like, you know, or you could add like a weapon or something like that. So, so you will be able to manipulate the player character to provide some, if you want to make a first person game, just to, you know, but right now, if you make a game and you know you can still at any point just drop blocks, which is not really something you'd want in a real game. So just just spawn a spawn a bullet, right? And just just define a behavior for it when it collides with an object. That's like a shot, right? Yeah. Yeah. And yeah, and you will. Right now it definitely is easier to do that with other characters. You can't really do it at all with the player character. Other than, I mean, you can interact with the player character a little bit like acting like where they are, but that's about the interaction you can do with right now. But in the future, you'll be able to change properties on the player character. Yeah, so the question was really about what, how you see that engine. Is it an engine for a first person game or is it intentionally like a setting where you manipulate other characters. So what I have five seconds for this ends.
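Pulling the pieces of the session together, the potato zombie amounts to a small behaviour state machine: wander by default, run away when the player gets close, switch back to wandering once the player is more than 25 meters away, and drop a gravestone roughly one time in ten while wandering. The Python sketch below only illustrates that logic; it is not Enu's actual Nim DSL, and the class, the 10-meter "too near" threshold and the movement numbers are assumptions.

import random

class PotatoZombie:
    def __init__(self):
        self.position = 0.0
        self.mode = "wander"
        self.gravestones = []

    def update(self, player_distance):
        # Behaviour switching, as described in the session.
        if player_distance < 10:        # the "too near" threshold is a guess
            self.mode = "runaway"
        elif player_distance > 25:      # the 25-meter rule from the session
            self.mode = "wander"

        if self.mode == "wander":
            self.position += random.randint(1, 5)       # walk forward one to five
            if random.randint(1, 10) == 1:              # one-in-ten chance
                self.gravestones.append(self.position)  # drop a gravestone here
        else:
            self.position += 10                         # run, don't walk

zombie = PotatoZombie()
for distance in [30, 30, 8, 8, 30, 30]:
    zombie.update(distance)
print("gravestones dropped at:", zombie.gravestones)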
Enu is a 3D live coding environment that can be used for education, exploration, and light game development. It uses a simple, Logo-style Nim DSL, and aims to be accessible to as many people as possible, even those who may not yet be able to read or type. It's still fairly immature, but will eventually be suitable for implementing simple multiplayer 3D games. In this presentation, Enu's creator will walk through creating a simple 3D game with his 6-year-old son using Enu 0.2.
10.5446/56968 (DOI)
Hello everyone and welcome to this presentation of the Coriolis ASIC design flow. The presentation is organized as follows. First I will remember the reasons that are behind or driving us to make Coriolis. Then I will present the basics of making an ASIC that is all ASICs are done, which will lead me to introduce the design flow with a special emphasis toward its software architecture. Then I will summarize the improvements that we made in 2021 and go to the plans for 2022. And finally I will present a small demonstration about the latest capabilities of Coriolis. One of the first reasons we started to develop Coriolis was to provide academics with tools independent from commercial software, which have closed source and with which you cannot exactly control what you do, or being trapped by backward capabilities problems. So with our tools you know exactly what's going on inside the source, and moreover if it doesn't exactly switch you, you can modify them. The second advantage is that with the software you can publish, share and modify the hardware design you have made. This avoids to have to reinvent the wheel every time you want something that has already been done by others, but cannot be published. We hope to build a free community with that approach exactly like the free software made for UNIX. But aside for those academics aspect, we also wanted Coriolis to be able to handle real situation designs, so small and medium enterprises will be able to make at low or no cost small or medium designs without having to pay the huge fees usually associated with commercial software. Of course by being open we should bolster the securities as I remind you, a secure software cannot run in an insecure hardware. If your hardware is not trusted, there is no way you can make a trusted software upon it, or at least that I'm aware of. Another aspect of this project is that it will help ensure the continued existence of the hardware. Typically there are some applications or domains when you want to be able to provide a design for a very long time. Typically imagine a nuclear power plant which has been qualified with a certain processor or certain kind of hardware. It will be only qualified for those hardware and not others. So you have to be able to provide those exact same hardware for the lifespan of the nuclear power plant which may be 50 or maybe even 60 years. So imagine a processor that has that kind of lifespan. As of today I don't know any one of them. So we wanted to start small and our first targets are the mature node. That is the node not lower than 130 nanometers because the problems to solve are less complex and we can make demonstrations and still have useful designs. I assume that most of you are familiar with the FPGA design flow and not with the ASIC design flow. So this figure helps to understand the difference. On the left hand side you have the FPGA design flow which starts by a logical synthesis. Then a place and route upon the FPGA matrix to targets that place and route generates a bit stream that you send inside the FPGA that you upload onto the FPGA and which configures it. Note that the FPGA is itself a kind of ASIC. It's a finished ASIC but it's an ASIC. On the other side, on the right side, you have the ASIC design flow which starts the same by a logical synthesis but on a slightly quite different structure which is the standard cells and gives a net list. Then we perform the physical synthesis which is called the place and route and gives us a layout. 
The layout is a drawing, just a big drawing and this is this drawing that we send to the funerary. The funerary is the factory that will translate, transform your design, your drawing, your layout into an ASIC and it will send back you that ASIC. There are two blocking points until recently in this design flow. The first point is the EDA tools. EDA tools exist but the commercial one comes with a very high license fee. For example, I heard that to benefit from the full flow of one major vendor, it's above a million euros per year. So that's quite as expensive for a small or medium business. The second blockage point is which is underlining in red here. It's the fact that the funerary protects their technological information, their trade secrets by NDA, non-disclosure agreements, that prevents you to share or publish everything you do with the design rules, with their private information. It completely prevents you to share your work. On the other hand, once fabricated, you have no restriction about the finished product which is the ASIC. So those two points are the key points that we want to solve. Let's have a closer look at the ASIC design flow. On the left-hand side of this slide, I put a drawing of a small, very small design so we can see the individual components. As I said, the design is just a drawing, but the drawing that has to respect a very complex set of design rules. That is drawing rules for modern technologies. This set of rules contains above a thousand rules. So how does that work? First, we have the RTL description of a design, which is basically a finite set machine, expressed in terms of registers and Boolean equations, complex Boolean equations. We use the logical synthesis to break it down in terms of little logical cells, which gives us a net list. The point of the logical cells called the standard cell library is that those little cells have a layout counterpart, that you can translate this small logical Boolean equation, which is a noun, into a design, an electrical design that will do the same function. When you have broken down your design, you have to reassemble it into a single coherent layout, which is done basically in two steps. The placement steps, which just place all those little pieces of layout inside an area, and then the routing step, which draws the wires that connect them, the electrical wires. And then you get, you got your complete layout, expressed in terms of GDS or AP file. This is basically a V-cycle into which the library, which is underlined in a light background, is the pivotal points. So you break down into the library, and then you assemble them. As I just said, the standard cell library is a key point of the design flow. The one we use has a long history. It starts with the S6-Lib 1, which has then been adapted by Naoshi Kochimizu from Tokai University, to better fit the deep submicron rules. It contains 89 cells, logic gate, depth lip flop, multiplexer, and so on. It is well validated, because we have built some design with it and checked that they work. But in the end, what we use now is another kind of portage, starting bas-aid on the topological structure of NS6-Lib, or S6-Lib, that has been made by Stav Veraghen from Chips-Foremaker, with its Flex-Lib library and the PDK maester. So it's an attempt to be portable across technology, but still getting close to the technology rules. So, and this is what we use through good old Coriolis project now. Here is a little bit more detailed view of the design flow, of the ISIC design flow. 
So, we want to emphasize in this slide the part that we are working on, which are the physical synthesis and the cell library, in cooperation with PDK maester and Chips-Foremaker. And the other part, we aggregate. So, the design flow starts with RTL generator, which can be anything that generates Verylog or VHDL. You have a MIGEN, TRISEL, Spinal HDL, System C, System Verylog, and maybe some other that I can forget. And this is a complex choice, because there are a lot of advantages to any of them, and these are advantages. So it's a trade-off. I won't go into the discussion about those kind of trade-offs, because it's almost a short discussion. So, we see that we can validate with VHDL. We have the logical synthesis, validation again with simulation, and we have to validate the layout with the DRC. We will go into that in more detail in the following slide. So, the right part, which is in a light-red background, is the part which is not publishable. That is, which is covered by the NDA. But fortunately, last year, Google released its SkyWater Free PDK Open Design Kit. So, fortunately, for that design kit, all this part can be published. Now, let's have a look to the logical synthesis part. As almost everyone out there, we are relying on YoSys to perform the logical synthesis. But as we are aggregating tools, it is a nightmare of file translation. For example, here, we start from VHDL, then we have to translate it into Verilog. Then YoSys can use it, generate a BlifFile as output, and we have then to translate the BlifFile into the Coriolis special subset of VHDL, which is called VST for VHDL Structural. Hopefully, this will improve in the future years and be more integrated and simplified. At the other end of the design flow, we found the validation. I mean that when you have placed and root your design, you will need to check that the layout that you have got is as error free as possible. It is not always sure that it will be, but at least you must perform all the verification possible. So there are three kinds of verification. Starting from the GDS file, which is your layout. You first perform a DRC or design rule checking. That means checking that all that you're drawing is completely correct. Respect all the rules dictated by the technological node. For example, that means that there is no wire too small or too close to each other. So that's basic verification. Then you need another tool, which is an extractor, which is currently and very recently we used OpenR6, which performs an extraction of the netlist. You compare the extracted netlist with the reference one, with an LVS tool, layout versus schematic, which means that the router has done its job correctly and the wire are exactly connecting the gates like they should. There is no open circuit or there is no disconnection and there is no short circuit. That is two wires crossing each other. So that's the second verification, which is very important. And the last one is the static timing analysis. It's based on the extracted netlist, but with a more fine, grained detail level, which is the transistor level. We use the gates inside the transistor and we combine them with the wire from the netlist and we check that all the timings are good, that you are really making all the timing constraints, meaning that the propagation time along the wire are all correct. And this is very important if you want to match the working frequency of your design and also check some other difficult problems. 
For that, we use the high-pass tool, which is also a tool developed at the Lipschitz Lab, which is old, but very reliable. Now let's have a look at the algorithm. At first glance, the algorithms are second-challenged, organized. First, you have the standard cell library, then the placer, based on a simple algorithm, then the global routing step, which uses a Dijkstra algorithm, very classic. And finally, the data-led router, which makes use of an unpublished academic algorithm, based on segment, rip-up and re-root. But each one of these steps, of course, is NP-complete. But they are not as second-challenged as they are. For example, one classical problem is the routing congestion. That is, the placer crammed the cells into two small areas, so the data-led router cannot draw all the wires without making overlap. So it must inform in some way that that area is overloaded, and the placer must insert more space for the router to complete. Basically, there are lots of constraints like that. This is what we really have in terms of interaction. And this has consequences in terms of data structure, upon which we construct our tools. So the key feature of Coriolis is the hurricane database through which all the tools communicate. And in terms of data structure, they make decorations, like illustrated here. The jcell is the hurricane database, and you have the data-led router or the global router, which makes decoration over it and communicate through them. Finally, all those considerations explain the shape of the doDesignScript.py, which manage how your design will be placed on route. As we have a Python interface, Descript is really a mix between Python part and C++ part. C++ part are used for computational intensive work or tasks, like placement or detailed routing. But other parts, like the clock tree generation here and some others, are only made in Python. In fact, there is not a real Coriolis binary. It's only made of Python scripts and library interfaces in Python. We also delegate as much as possible the task of managing the file system to Python, because it is already done and we don't want to reinvent the wheel by doing it inside Coriolis. We are also being busy making new chips. For example, the biggest one, the Librosoc, which is 120 kG, account for 1.3 million transistors, for an area of 28 mm2. It has been done in the TSMC 180 nanometers technology. And we hope to get it back, packaged, in a few weeks. Staff Veragen did also use our tools to make his own microcontroller chip, which contains a Motorola 68000 and a MOS 6502, including 4 kB of onboard RAM. The onboard RAM can be seen at the bottom right corner of the design. That are those four blanks area, because it did use directly the RAM supplied by TSMC for which we cannot display the design, the layout. So, as I said, it's done in TSMC 350 nanometers and amount for 2026 kG. The last chip was done by ChipFlow, and it contains a Minerva RISC-5 implementation by LambdaSoc. It was submitted to the MPW-4 Google program, which makes use of the SkyWater 180 nanometers. It makes use of the Caravelle Arnes, which is a way to encapsulate your design for measurements. And it accounts for about 57 kG. What are we going to do for next year? The first item is to improve the speed of the placer. The placer gives high-quality placement, but unfortunately, it seems more sensitive to the area of the placement than to the number of standard cells, which is quite strange. We will work on that. 
Then we will also work on density-driven placement, that is, start to introduce the loop, the retraction loop between the feedback loop, sorry, between the placement and the global routing. And at the same time, we will also interest ourselves to the timing-driven placement because they require similar functionalities or features. And in that same movement, we will also rewrite the netlist extractor and comparator, which are already present inside Coriolis. But unfortunately, their implementation is butchered, so we will rewrite them from scratch. And finally, we plan to release a new chip, which is a follow-up of the LibreSoc project, and which will be a small gigabit Ethernet router, and will be a much bigger design. And finally, let's have a little demonstration of what Coriolis can do for you. This example is not completely part of Alliance Check Toolkit yet, but it will be in the future. So, first, you see that we have some very low file, which describe our little design. In our case, it's the 6502, the most 6502. So we have the RTL representation of our design. We also have the little doDesign script, which will tell Coriolis what to do, how to place and route your design. There is some instruction inside, but not too much to customize by yourself. Most of them can be copied from one reference script to another. So it's very simple. And finally, a little Mac file, which is here again mostly generic, into which you have just to indicate what is your top Mac file. So, as I said, it is mostly done by Mac file. So the first step is to generate the, is to call YoSys and perform the logical synthesis. So we do that by Mac VST. So people familiar with YoSys did recognize this output. Now, it has been processed by YoSys and then the output BLEF file has been translated into a VST file, VHDL structural, which is loadable by Coriolis. If you look at VST, you see that we have the relevant file. So now we can run truly the place and route step. This makes TGT and run the script. We run do design. Here we go. So I activated the step by step. It is a debug feature, but it is useful for demo also. So what you see here is the core and around it is the harness, because it's the demonstration using the Caravelle harness for the Google SkyWater program. So we have done the design. Now we have built a clock tree. I will make a zoom on the center area, which is where the chip, the processor is located. So you see the clock tree, unconnected yet, but built prior to the placement. Then we perform the placement. This is the simple algorithm at works and we display the lower bound placement. That is the one with the shorter distances. And as you can see, the lower bound and the upper bound converge slowly together, which explains the progressive spreading of the cells over the area. Now we have performed the placement. So I will start to display one net for nicer demonstration. I did choose this one, C7. Here we go. So what you see now is the fly line representing the net. I will perform a zoom around the net. So we see it better as it progressively refined. We are before the global routine, so we only see a fly line connecting all the terminals. Now we see the global routine appear. Here we have the global routine and the trunk of the net. Here we see the global routine. I will zoom back on the net. Here we go. Then after the global routine comes the detailed routine. And finally you see it has just submitted a small adjustment and the global routine is finished. The detailed routine is finished. 
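To give a feel for the global routing stage that just ran in the demo, and for the classic Dijkstra algorithm mentioned earlier, here is a toy Python illustration of shortest-path routing on a coarse grid with a crude congestion cost. It is purely illustrative and has nothing to do with Coriolis' actual implementation.

import heapq

def route(grid_w, grid_h, src, dst, usage):
    # Cheapest path between two terminals; edges already used by other nets cost more.
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == dst:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h):
                continue
            cost = 1 + usage.get(frozenset((cell, nxt)), 0)
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                prev[nxt] = cell
                heapq.heappush(heap, (d + cost, nxt))
    path, cell = [dst], dst
    while cell != src:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

usage = {}                                          # edge -> number of nets using it
for a, b in [((0, 0), (4, 3)), ((0, 3), (4, 0))]:   # two nets on the same 5x4 grid
    path = route(5, 4, a, b, usage)
    for u, v in zip(path, path[1:]):
        edge = frozenset((u, v))
        usage[edge] = usage.get(edge, 0) + 1
    print(path)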
And you can see that any overlap has been solved. Now the whole design has been finished and you see that it's connected to the harness. We change the power distribution of the Google sky water harness by only putting vertical lines. Here big vertical lines that make them big vias to the alimentation. If we zoom again we can see them. So now you have a completed design which can be sent to the Sky Water Fundry through the Google program. Thank you for your attention. Thank you very much for that Jean-Paul. We have any questions? Please do type them in the main conference room. It's nice to see that we have had some questions and interactions. Thank you Jean-Paul for interacting with people. Adam does a specific PDK and implies a specific foundry. Our more open PDKs are coming by any chance. Our more open PDKs coming by any chance. This is a delicate issue. The foundries are a bit jumpy. Sky Water I believe through the Google sponsored program are considering 19 nanometer. There is a huge number of hoops to jump through before that happens. What else we got? This is your parameterised approach. Is your parameterised approach a template or was it a layout? Yes, it can do that. It is a virtual layout which generates the actual layout that is compliant with the PDK. The NSX lib was done before that and is independent of it. That is a different approach. You can either use NSX lib or you can use flexlib. You can see Adam typing. This will transfer over to the carousel room. Yes, of course, Jean-Paul, no we weren't able to hear you. Your microphone is still listed on mute. Yes, Jean-Paul, just waiting for Adam and Nathan to type. Jean-Paul, we are waiting for you to resolve the microphone. Nathan's question, do you extract timing information from the generator standard style through spy simulation to create lib files and then it is a place and route timing driven? We are still waiting for Jean-Paul to sort out the microphone. Do you extract the timing information from the generator standard styles? Yes, we do. We do extract the timing information from the generator standard styles. Alright, Marie Minervais put it into high task Jiggle to do a transistor and a mixed transistor gate level static time analysis. Is the place and route timing driven? Yes, I believe so. That's a key feature of it. Adam is asking who is going to disconnect and reconnect to the Q&A clinic to see if we can solve the microphone issue. Who on practice will be using the tools research institutions, small companies or individuals or three? One of the key things at the moment is the NDAs that are sort of basically equivalent to a worldwide cartel. They basically prevent research institutions and security researchers from doing academics from being able to publish any of their results. So this is one way to break that. Yes, 180 nanometer is extremely cheap and is the largest and most commonly used geometry in the world. Last time I checked it's only in production. You're only looking at $600 for an 18-inch wafer. The whole idea is that yes, you will be able to use Coriolis-TOOT to your full GDS-2 files, rather than having to spend a quarter of a million dollars a week on licensing of proprietary tools. That's a mad cost, which makes it completely impractical to consider doing an ASIC. The volume said you have to sell that you will be enormous to cover your costs, whereas if you've got LibreOpen place and route, many more people can consider it. 
And yes, I'm an individual and I've been doing layout and working with Sean Paul on this. Okay, reconnected. Nathan, I can see your typing, so just wait for you. Yes, Fari Fablas. Yes, they are. So there is now Chips for Makers house down a Skywarden 130 nanometer flex-lib port. The repository. So previously in the chat I linked the FreePDK45 version, so you can do your layout with that. But of course FreePDK45 is an academic PDK, it's not intended for actual GDS-2 files sending to a foundry, because there's no fab that actually exists, which would take it. But this Chips for Makers PDK Skywarden 130 nanometer is intended to work with the Skywarden 130 nanometer process. Thank you Sean Paul there, we do not have timing for different place and route yet, should be working on features this year. Okay. So there is at least one project that has submitted to EFABLAS using Coriolis-2 with Chips for Makers 130 PDK. Hi David, nice to see you here. Thank you Nathan and Sean Paul really appreciates that. It's been a lot of work. John Paul's project has been running since 1993, 1994. One of the things, there is a Japanese foundry in something very very large. Free software projects come submit to, I forget the name of the university but they're working with a company that uses for training purposes. So, the PENGIMP 3 was a 180, yes, exactly. 400MHz DDR3 memory was done in 130 nanometer. So, 800MHz DDR, so 400MHz clock rate. If you look you can see DRAM chips were in 130 nanometer, it's perfectly possible. Yes, Elias started in the 90s here. And it's been slowly converting from that C and C++ code over to Python, but keeping the high performance parts in C++. If anybody actually wants to try it out for themselves, I'm just going to go whilst there's a question and it's still running. I'm going to drop a link to the Coriolis install script which we've been running within the LibreSupp project. There you go, if you want to actually just run an automated auto install, there you go, there's a script to do that, which I've dropped into the chat. It's at git.librestock.org and it's on the dev end setup repository. If anybody would like to continue talking with Jean-Paul, I will drop the link in the chat and the last two. Thank you very much Jean-Paul for such a fascinating talk, it's really appreciated. It's really important to work that you're doing.
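For readers wondering what the doDesignScript.py mentioned in the talk roughly looks like, here is a heavily simplified sketch of a Coriolis Python script. Treat it as pseudocode: the module and method names are approximate recollections of the Coriolis 2 API, the cell name is a placeholder, and the real calls should be taken from the doDesign.py examples shipped with alliance-check-toolkit.

import CRL        # Coriolis / Alliance framework bindings (names approximate)
import Etesian    # the analytic placer
import Katana     # the global and detailed router

af = CRL.AllianceFramework.get()
cell = af.getCell("my_design", CRL.Catalog.State.Logical)   # netlist from synthesis

etesian = Etesian.EtesianEngine.create(cell)
etesian.place()                   # placement, as shown in the demo

katana = Katana.KatanaEngine.create(cell)
katana.digitalInit()
katana.runGlobalRouter()          # Dijkstra-based global routing (flag arguments omitted)
katana.loadGlobalRouting()        # hand the global routes to the detailed router
katana.runNegociate()             # rip-up and re-route detailed routing
katana.finalizeLayout()

af.saveCell(cell, CRL.Catalog.State.Views)   # write out the placed-and-routed design

In a real doDesign.py the clock tree generation, I/O pad placement and chip-level assembly are driven from the same kind of Python code, with the computationally heavy steps implemented in C++ underneath, as described in the talk.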
Sorbonne Université, in collaboration with Chips4Makers and LibreSOC, is working to provide a complete FOSS toolchain to make ASICs in mature technological nodes, that is, no smaller than 130nm. We take a circuit description in HDL and synthesize it with Yosys, but instead of targeting an FPGA, we map it onto an ASIC standard cell library to get a gate-level netlist. From there, with Coriolis2, we perform the classical steps of an RTL-to-GDSII flow, that is, placement and routing, along with very basic timing closure. We will particularly focus on last year's progress and present the planned improvements and new features for 2022.
10.5446/56969 (DOI)
Hello, everyone, and good day. My name is Mohammed Kalsim. I am the co-founder and CTO of E-Pedalus. And thank you very much for making the time to join the presentation. And I hope it would follow up with questions that are useful to everyone. So I'm going to give a little bit of an introduction about myself. And basically, I come from the chip world, from a long career in the wireless industry. My background is in analog circuit design. And I joined Texas Instruments when TI was starting the era of the chip development for smartphones in general. And then the smartphones later. And if you know a processor like OMAP, then you'll find that the analog on OMAP Partial F3 and OMAP4 and a bit of OMAP5 were my responsibility. So if you hate it or like it, you may blame it on me. And here are some of the devices that I was involved in indirectly at some point. I also am a thinker and I'm a hardware enthusiast and always design and look for what the chips do in smart devices or any devices. So that's my habit from since I was a kid. I open things down. Sometimes when I was a kid, I wasn't able to bring them back. But now that's my tear down the device. Device doesn't understand how they're constructed and what chips are in. And that's a passion of mine as well pretty much for a lot of the new devices that are out there. So I'm going to dive into the talk right away. And it's basically talking about the chip design and how we can approach the problem of the inaccessibility and the restrictions to innovations for this industry. So I have a friend of mine who just put together this little abstract view. And on the left side, it says, and it shows that as the process technology evolves from down in the feature size, the development cost of the process and the designs around it, they increase actually exponentially, the number of capable companies or people who have access to this technology and can design with it goes down. And that means the number of, generally, number of designs goes down. So it's a very simple view. Now, when the number of designs goes down, or designers rather go down, that means the number of or the freedom to actually thinker and have more ideas move into silicon becomes restricted, which in general restricts innovation. This is another view, it's a financial view. And this would probably be the last financial slide on this. It is actually. And it shows that over the years, that's the number of companies, companies, not individuals even, that have the access to advanced node technologies or process technologies. It's just going down over time. And that didn't change really over beyond 2018, since we're now in 2022. And the net by itself looks like the car companies. As it converges, you end up with a few car companies that have the market and the people who have ideas in the cars and engines. They just can't just come up with an idea anywhere and just say it works. So this is just the dynamics of the industry. Now, on the other side of the industry, the need, so there's so-called the long tail. I'm representing here with orange. And I intend it to drive this way, because I usually call it the very, very long tail. And what is that is that it's the customized applications for specific markets or specific use. 
And typically, people today, the hardware developers, they use what's available in the FPGA world or standard devices from the standard semiconductor, either processors or chips, which typically leads to different living with the form factor or extra capability on the chip. Or I need the five regulators or three regulators, but I'm getting six or just I need one extra. So there is what I call here. And also that's a term that I got from somebody. For my mind, it's a right-sized compute, right-sized compute, meaning that I can, for every problem that I am facing, I'd like to size the chip or the compute capability and the power, especially if it's a battery-powered device, to the application. Now, on the left-hand side, this is a traditional approach where smartphones, you have tens of millions, maybe tens of millions of units or hundreds of millions of units, which justifies the development cost around these applications. The problem with the yellow or the orange part is that they're lower volume. And generally speaking, it is hard to justify a cost of an ASIC for these orange areas. Now, and then the typical answer for that is the standard product. Then now we believe that the number of these applications are gonna be tens of thousands. So if you wanna achieve both the right-sized compute and the tens of thousands, you really need designer. Or you can't just have one standard product and you need designers around the world, which is the problem. So we wanna have a thousand X developers, roughly, to be able to deliver to that. Now, when people look at the LIMTail, they keep asking me, what's the killer app? I'm like, well, there's no killer app. I don't know what the killer app is. The only way to have this work is to have people try and communicate with the market or the end users and come back with iterations. So in order to do that, it was happening in the software world. It is happening in the software world in a way that is efficient so that you can compile something, make it an app and put it out and get feedback and improve. And you can see that in the software world. The only answer for the demand, to meet the demand for the customization of hardware and chips is a community-based approach, meaning that instead of saying, I need a company or 10 companies to deliver to the world, I will just say, we need the world to deliver to the world. We need to community around the world to learn and become capable of making the custom applications. And it's actually grows with the demand. And also it has a variety of expertise. You can tell from the open source community, for example, you can amere out of expertise that can actually deliver value. And then the last thing is the natural selection. This is beautiful is that people choose what they like to work on and what they're passionate about and probably what they're good at versus working in a restricted environment where you just have to do what you have to do. Now, I'm gonna jump to E-Fabulous. Just a quick overview of why E-Fabulous exists. I'm just gonna say it's for meeting that goal. So in order to get these 1000X account in designer, we need to simplify the process. Typically the process of making ships is very complicated. So we wanna simplify to make it easy and adaptable by a lot of people. And again, our user number 1000X don't quote me on it, but just a multiplier that it is actually hard to achieve with the current traditional approaches. 
So yeah, so we need the 1000X maybe more, maybe less, but we need a huge multiplier of the people who are able to design. Now, with either we're gonna have to educate people or change the concept or simplify the process, we need to have a combination because actually it's time to simplify the process because I don't need to know everything about the chip to be able to create a chip. That's what's actually achievable a little bit through the FPGA. So that's what we wanna do. The example that I'm gonna use here is that in the software, the app stores, the app stores, before the app stores, in order to develop a piece of software and market it and sell it or expose it to customers, it's a very hard process. When the app stores showed up, whether it's iOS or Android, what happened is that there's a development tool set that is very robust and has unreliable. And then there's a business process, structured flow of quality control or at least feedback and the connections to customers. So now if we can do that, then you can see, you remember in the software or the app stores, it just exploded into millions and millions of developers anywhere from kids all the way to any age that wants to do something on their own without have to worry to actually structure a company or so. And the process also has become simplified because of the tools that are available. Now in the chip world, if you think, some of the chips are actually this, you can be designed by one to four engineers. And there are chips that require 100 people, but my experience says you can get a lot done within 10 people if they know what they're doing. And that's hard to bring in one room if you're hiring, maybe, but if you're working with the community, you'll probably find these for 10 people to do something really great. Now, in order to do that, we need to democratize the access to the needed, what's needed to chip design. It's not as simple as a development kit. And then, again, business process and the connection to customers. So now this is how we're gonna go. Now, in order to get that, there are, it's a little bit, it's much more complex actually than just a development kit because it's not one owner. It doesn't come from one place like Apple, Google, other open markets places. So in order to produce a useful chip and that is usable by the users, and could be the end user of a smart lock, something that sits in an air purifier or air quality sensors or systems, you need to know the access to the market. You need to have knowledge that you may not have. IP or blocks that are currently gated by generally NDAs and on access costs, EDA tools, similarly, PDK and affordable manufacturing and a reduction of the cost of manufacturing. So there are partial solutions that exist and they're admirable work pieces, but it still doesn't provide that, you know, I want it once per all. So what we wanted to do is a complete holistic solution. And in order to do this, it's basically boiling the ocean. So the right thing is to do it one at a time by solving everyone with a different tactic. And generally speaking, I will leave this, the comments maybe for the questions, but that's how we did. So the key is that instead of having just a developing kit, then it becomes like a stack in a box basically that you can be available to everyone and designers can collaborate and define and develop and commercialize their products for application for long-tail applications. To start, we started with the EDA, that's one aspect of it. 
And one of the things is that we wanted basically to give the EDA's flow to anybody anywhere without cost or without the permission or the violation of any licenses. So we assembled a great variety of tools for analog and digital design from open source engines. And we worked to use them to develop real chips. We also, on the IP, this is kind of a recent view. So these are examples of the places you can find. I'm sure there are more, maybe I've missed, but I just quickly, I think given the programs that we're working on right now, you can go to EFAB to see what people are doing in actual silicon IP that is submitted into the Google projects, for example. And then there are other examples for accessing IP in a structured way, like LibreCores or QSoft. Generally, Google is good to find. I'm always surprised with what individuals or groups have been doing in improving what's available for design. Now, in order, as I said, in order to improve that this is actually work, we took it from ourselves to develop ASICs. And we started with a chip called Raven. This chip is a RISC-5 based on a PicoRV32 from ClearWolf. And the IP and or the blocks are analog are coming from XFAB boundary. And this chip is in 180 nanometer. And one thing we intended to do is to make that chip top-level open source. So if you go to GitHub at this location, you're gonna find the top-level very long is there including behavioral models for the analog so you can simulate the entire chip in a digital simulator actually with the analog presented in real numbers. We also developed another chip called Ravenna. This was based on a request from a customer and it has more resources and it's also open source and it's available for the tinkering and adding additional features. Both of them are available on our platform so you can access in black box mode the IP. These chips are, here we have four major chips that we started with. Now, on the left is that Raven and Ravenna RISC-5 microcontrollers and then the top right is actually an open source chip but the core is ARM based. So it is not when I say open source chip, I mean that the top-level connectivity or a net list. Now, it has a lot of components that are not open source but it is a representation or a step forward in that direction. And then in the bottom right, this is a chip called Hydra that is purely analog for characterization testing. All of these, it's important to say all of these have been completely designed and verified with 100% open source tools period. The Raven chip was actually demonstrated in different places, we partnered with other people to make it a part of the other integrations and then this is the Hydra chip just for those who wanna take a look at it later. It's an array of analog chip. One of the interesting things is that Raven has a top-level open source chip and the core being open source. The NAC, one of the NAC research labs looked at it as this is an ideal for trust through transparency. So they wanted to look at as, use it as a kind of an authenticator for our ACU processor for a little bit of a rooted trust. And they requested Raven, we send them the boards and they developed several applications as you can see here but they also specced with us the Ravena chip and we delivered it to them. Now, this is what general overview of eFabulous, but now as we published a Raven chip all over the world and we had the attention of different places, especially Google. 
And then Google's philosophy in this involvement was if you can have a continuous integration in software, why don't we have a continuous integration concept in the hardware or specific chip world, design something, build it and measure it or learn from it and then do it again and keep iterating. In order to do that, you really need to have a cost structure and availability and access to make it worse than what. So the Google program was, it started as Google would fund six manufacturing runs and that means and you'll see why it's 240 chips. And now when I say at least, it's because it's actually getting more now, there were four last year and there's another, at least another four this year, 22. And the first of which is in March 21st this year. The submitted designs, and by the way, just to get into this, you don't need to go through big hassle, you can start designing, go to eFabulous.com, just show the intent that you have a project and just write the abstract of the project and then some, if you finish before the deadline, meaning that you pass all the pre-checkers, then, which are available, automated and open source to you, then you're in. So it's a first come, first serve and Google's purpose from that is to just mobilize the wheel in terms of having people to learn and design and learn and re-iterate. So if your design is perfect and you're not taking too much risk or taking risk, so it is okay to get in and learn something. So if you don't be intimidated by that chip concept because it is, with proven that, you know, from a variety of ages and a variety of software and hardware experiences, things worked out. Here are some of the links, you can go and start. And we've proven, as I said, many of the users and designers have been not, almost half from the Google program, haven't designed the chip before and they have chips and they actually demonstrated a success. For more information, go get it and see. One of the programs, I'll go to the eFabric.com and see. The interesting things about the Google program is that the tools are open source and then you get a chip that is as a reference or a platform that you can modify and add to it your design. And then you get development boards, not just the parts and abundance of parts, obviously, if you wanna build your own boards. The key component of that program is a partnership with Google, SkyWater and Feblis. They started with the open sourcing, the PDK, which is a representation of the process technology. It's the models, it's the libraries, the IOs, different views and documentation. So it is available without an NDA, which has never happened in the history of this industry as Foundry would actually open source the PDK of a reliable process as is. Just to give a background, if you're familiar with the PSock devices from Cypress Semiconductors, SkyWater, it was a spin off, is a spin off from Cypress back in 2017, I think. And the process that is being used, the 130 nanometer, is the process that's being used for PSock devices. So it is truly professionally done for industrial grade manufacturing. Now, by people say, what is a 130 nanometer? I work personally on 130 nanometer, 17, actually, almost 20 years ago. That was my part of the development of the analog back then. Well, until today, you can do a lot. If you're gonna see, go see what people did today since the last three years on this link. 
However, just an example I wanna bring up is, if you look at the Intel chips themselves, back in this, the age of 2001, which is roughly about 20 years ago, you'll see that even 180 nanometer, you're able to get the speed of about 1.7 gigahertz. Of course, Intel has a lot of this customization and so to get there, but I'm saying that transistor pitch doesn't mean you won't get performance. And also some of the data that are borrowed from our friends in URO Practice, they have a very good annual report. It shows here that in the microcosm of use in semiconductors from a URO Practice standpoint, there's about 50% of the process technology or data project on the process or their customers, they're below something that's bigger than 90 nanometer, which is 1.10, 1.30, I don't know, I can see it. Now, part of the project for Google is that we develop almost a new compiler, right, for digital functions. So that means if you know very long, then you develop something in better log. And recently we made that in Python using something like Litux. You compile your code and you get a functioning design, maybe not the best performance, maybe not the best area, but it works so you can iterate. And the intent here was to make it available. The Google partner with me is Dimansal, always says, I'm a software guy, I don't know what a DRC is, I don't know how to fix it, I just want a complete design that works and manufacturable and I can learn from it. So if you do that and you make it look like a compiler, I use the new compilers all over the place, but I don't know how they work exactly. And I get software that I actually use. So if you apply this concept on the digital design, well, that means software developers should be able to do that without a blame. And that opens, starts opening this thousand X more, at least people that are capable of making a digital design. Now it's also in the analog design, same. If you learn the basics or you wanna go further, also that actually with the great help from the community, it's become possible. Now, both of them, this is actually again the community. So we're working closely, this is a collaboration between the Open Road team and the flow it's called Open Lane, that the E-Cab is developed. And it is on several process technologies available now. Analog is the same and it's collaboration with the, for example, in this here, the ZEIS team, Injustice team, and the X-Skeem developer, Stiff and Shippers. Now, part of the project also to simplify, so all of these things that I'm going to notice, I'm going to simplify layer by layer. We use the concept of Caravell because that's the, when you look at it or look it up, it's a ship that has specific properties that the Portuguese used back in the 15th, 16th century. Well, Caravell is a ship that is almost like a ship that has cargo space or project space that you can put your own design in it. It has resources, it's an open source ship, and it's using components from like OpenRAM and the BigRV32, which are open source generators and IP. And this ship, you also open source a new, because of the processes open, you'll see it all over the, it's open on GitHub. And here's the sum of the features of the on-chip features that you can utilize for your design. The way it's being used is this, the designer designs the block and then it gets dropped into the chip and that produces a user-specific Caravell. The designer only focuses on the left side of the slide. 
They don't have to worry about the Caravell itself except from the models and the behavioral verification to make sure that things work. And we plug it in into the Caravell master. Now the designer also gets actually 300 packaged parts, plus five boards, and I mentioned that early. So it is straight up, you can push it into the, is USB connector and get working. And the board is like here. So this is one example of the Caravell boards with a specific design on it. Now we, there are multiple directions, multiple tracks that we're improving on right now, again with a collaboration with the community. There's also one piece of news here is that because of what Skywater Foundry did, it got the attention of other boundaries. And then there will be, there's a plan for a second foundry and you may hear about announcement of that too. Just the engagement here that we can see when the Skywater PDK was opened, it gave us like, basically it's like a barrier of behavior against projects that are accessible for years. And also it gets downloaded about 700 times a week. We also have the Slack community that is rich with almost 2,500 people. You can join and get your self-invite from there. Now the first shuttle went with Google, was an interesting one, we called it in PW1. You can see the designs up there too. But it had a variety of contributors from individuals to companies. It was a quick book. Again, it's free for the designer funded by Google for the time being. And then, part of PW1 was actually to actually get this picture done. It's the first time the open source hardware, at least to my knowledge, they get engraved on actual chip that is totally open source all the way to GDS. There's no compromise here. And just to mention the community contribution, the John McMaster here, who's pictured in the bottom left, has actually took that chip, decapped it, and he has an electron scanning microscope at home to actually do that. So that's kind of a community interest. And then in PW2, again, I'm not gonna go over the whole design, but you can see the links. If you go to the link below, you'll see the full overview. And as well as the GitHub repositories. This is just a candy for the eye to see the two different, the 80 chips plus the test chips for the foundry up there. Now, the Google, when we found that it's very important that people say, I wanna open source my design and I wanna guarantee my spot and I wanna control my schedule. So we created an offering called Chippick Night. Chippick Night is basically the same thing for Google. Just shift it into eFabulous completely and to you actually, to the designer. He says, okay, I'm gonna get my own design. I don't have to open source it or not, that's up to me. And then I will book my space for 200 bucks starting designing, to start designing. And then you will get actually 10 millimeter squares inside the chip, which is I mentioned earlier, but it is actually, it's still the real state. I always say it's a framed house with plumbing and electricity and then you bring your own appliance. And these are some of the pricing here, but the pricing is actually intended to not to show the dollar numbers. That's it show that comparison. You can either get 100 parts with the QFN 300 with the WSP, which has never happened. Or if you're doing a low volume production, you can pay 20,000 and you get a thousand units, which is almost basically $20 per chip in all includes. These are options of Chip Ignite. 
Some of them are the same as the Chip Ignite as the area as it is. And then we got requests to say, if I don't wanna use the management area or the logic here, I wanna the area all myself. So we started offering that as well. Here's the schedule for Chip Ignite and MPW for Google. So these are two Chip Ignite data points. One of them is April 8th, the Google shuttle that's upcoming is March 21st. And the next Chip Ignite or second Chip Ignite in 2022 will be in June. Go, you know, can go to eFeders to come to see the details. The users of Chip Ignite, not the Google program, basically started with the universities. Here at Stanford University used it for a course. There was basically the whole course is designed and end up with a go to fabrication and then you come back and test it. Also the IEEE, they created a competition. So they actually got 10 Chip Ignites and then it was a great competition. You can view it and see how the, we had like 56 proposals in it, great designs. Startups also said, okay, I can use Chip Ignite as it is. Basically I can put my own designer like the blue in the middle here design. And then I have a product basically for low volume. I don't have to optimize the area because it makes sense economically and I, but I can get the market very quickly to my customer. This is back to the IEEE. It also shows that these were 10 projects from all over the world. The last six projects, you can see how people utilize one slot to multiple designs instead just having one slot for one project. This is the part where I showed the university. There's multiple aspects of the universities have started talking to us to actually adopt that either in the courses or in parallel sessions like the capstone projects and graduate research. And that's it. So I close here and then when you get the slides, you'll find I'm gonna flash through this very quickly. Fun from the community that I borrowed and just shared different people have posted different pictures and you'll be able to flash through them. And these are all contributors from different places. And they know their names, but you can find them also on Twitter. And it shows how passionate and competent the community and to actually get something done. This is a RAM test chip that Andrew Zonenberg created a board for testing it. This is a met then obviously people know met then they used one slot to create a course that has 16 slots. This is the parallel. This is us. Actually, this is me on my desk. And then other people like TNT or Sylvain Mone. This show you see the quality of the sharing, how people share things and incredible. Never happened before. And get the chance just to take a look at this. And that's my last slide. Thank you. And then open for questions. Ryan. So I look actually or okay now I can hear. Thank you Mohammed for a great talk and if you want to read any questions from the chat you'll be welcome. For instance let's see how much will that normally cost if not using cheap ignite say another shuttle. Well the most important thing I would like to say here is that the word shuttle is kind of a, the word shuttle is actually kind of tricky. As I mentioned a shuttle typically is it gives you an area on silicon so if you want to compare it would be wouldn't compare that's the area because you're getting an area with as I mentioned with the older resources around it to make work like logic analyzer an actual chip around your design. 
So definitely, if you compare it, this would be about 10 square millimeters for 10,000 — and even if at that level the cost is the same, you wouldn't get the same resources around it, or the packaged devices and boards. Okay, we have a question about when this Slack will be bridged to IRC, maybe. It is actually bridged — if you go to the IRC servers you're going to find the #skywater-pdk channel; I should have the link, maybe I'll send it later. And: is Idra a reference design? It is a reference design; the design itself is open as well, but the blocks in it are not — however, you can now replace them, because a lot of the blocks coming out of the Google MPWs and Chip Ignite are open source. All right, I think I'll ask a question myself. Go ahead. Okay, what kind of interfaces does the kit have — are they high speed, can you put USB on it, maybe? Right now the board has USB: you can connect it to the computer, and it comes with open source flashing software, and then you can go from there. The chip doesn't have USB yet — right now it's either a UART or an SPI interface, on top of the general purpose IOs. The general purpose IOs — maybe they are high speed; what speed could they reach? For this process, as they are, they're not high speed — they're actually 60 megahertz maximum, as specified by the original design from Cypress. However, designers on the shuttles have actually created their own designs, like LVDS, so there will be a variety of higher speed IOs coming from the community — you'll see it, because everybody feels limited by the 60 megahertz. Great, so I think we'll wrap up, and for anybody who wants to continue, the link will be available soon to the room where you can continue the chat. Thank you. Great, thank you so much, have a good day everyone.
This presentation by Mohammed Kaseem, the CTO of e-Fabless, will outline how e-Fabless is empowering Libre/Open VLSI Hardware development. There are two initiatives: ChipIgnite which provides significantly-reduced cost Shuttle runs, and the Google-sponsored Skywater 130nm Programme.
10.5446/56971 (DOI)
Hello everyone, today we're going to talk about polyglot cloud native debugging and a bit about APMs. We don't have much time, so I'll get right to it. But first, a few things about me. I was a consultant for over a decade, I worked at Sun, founded a couple of companies, wrote a couple of books, I wrote a lot of open source code, and I currently work as a developer advocate for Lightrun. My email and Twitter accounts are listed here, so feel free to write to me. I have a blog that talks about debugging and production issues at talktotheduck.dev — it would be great if you check it out and let me know what you think. I love APMs, they are absolutely wonderful. I'm old enough to remember a time when they weren't around, and I'm so happy we moved past that. This is absolutely amazing: the dashboard and the details — you get this great dashboard with just everything you need. We're truly in a golden age of monitoring. Hell, when I started we used to monitor the server by kicking it and listening to hear whether the hard drive was spinning properly. Today with Kubernetes, deployments scale to such a level that we need tools like this to get some insight into production. Without an APM we're — well, not blind as a bat, but it's pretty close. A lot of the issues we run into start when we notice an anomaly in the dashboard. We see a spike in failures or something that performs a bit too slowly. The APM is amazing at showing those hiccups. But this is where it stops. It can tell us that a web service performed badly or failed. It can't tell us why. It can't point at a line of code. So let's stop for a second and talk about a different line. This line. On one side we have developers. On the other side we have the ops or DevOps. This is a line we've had for a long time. It's something we drew out of necessity, because when developers were given access to production — well, I don't want to be too dramatic, but it didn't end well. This was literally the situation not too long ago. Yes, we had sysadmins, but the whole process used to be a mess. That was no good. We need a better solution than this hard separation, because the ops people don't necessarily know how to solve the problems made by the developers; they know how to solve ops problems. So when a container has a problem and the DevOps don't know how to fix it, well, it starts a problematic feedback loop of test, redeploy, rinse, repeat. That isn't ideal. Monitoring tools are like the bat signal. They come up and we the developers — we're Batman, Batwoman, Batperson, all of us heroes — step up to deal with the bugs, and we're the last line of defense against their villainy. Well, we're code bat-people really, kind of the same thing without the six pack abs. Code-bat-man needs to know where the crime — or the bugs — is happening in the code. So these dashboards point us towards the crime we have to fight in our system. But that's where things get hard. We start digging into logs trying to find the problem. The dashboard sent us in a general direction, like a performance problem or higher error rates, but now we need to jump into logs and hope we can find something there that will somehow explain the problems we're seeing. That's like going from a jet engine back to Stone Age tools. There are log processing platforms that do an amazing job processing these logs and finding the gold within them, but even then, it's a needle in a haystack.
That's the good outcome, with the logs already there waiting for us. But obviously we can't have logging all over the place — our billing will go through the roof and our performance will suffer. We're stuck in this loop of: add a new log, go through CI/CD, which includes the QA cycle and everything. This can take hours at best. Then we reproduce the issue on the production server with our fingers crossed and try to analyze what went on. Hopefully you found the issue, because if not it's effectively rinse and repeat for the whole process. In the meantime you still have a bug in production and developers are wasting their time. There just has to be a better way. It's 2021 and logs are still the way we solve bugs in this day and age. Don't get me wrong, I love logs, and today's logs are totally different from what we had even 10 years ago. But you need to know about your problem in advance for a log to work. The problem is, I'm not clairvoyant. When I write code, I can't tell what bugs or problems the code will have before the code is written. I'm in the same boat as you are: the bug doesn't exist yet. So I'm faced with the dilemma of whether to log something. This is a bit like the dilemma of writing comments: does it make the code look noisy and stupid, or will I find this useful at 2am when everything isn't working and I want to rip out the few strands of hair I still have left because of this damn production problem? Debuggers are amazing. They let us set breakpoints, see call stacks, inspect variables and more. If only we could do the same for production systems. But debuggers weren't designed for this. They're very insecure when debugging remotely. They can block your server while you're sending debug commands remotely. A small mistake, such as an expensive condition, can literally destroy your server. I might be repeating an urban legend here, but 20 or so years ago I heard a story about a guy who was debugging a rail system located on a cliff. He stopped at a breakpoint during debugging, and the multi-million dollar hardware fell into the sea because it didn't receive the stop command. Again, I don't know if it's a true story, but it's plausible. Debuggers weren't really designed for these situations. Worse, debuggers are limited to one server. If you have a cluster with multiple machines, the problem can manifest on one machine always, or it might manifest on a random machine. We can't rely on pure luck. And if I have multiple servers with multiple languages and platforms, crossing from one to another with a debugger — well, it's possible in theory, but I can't even imagine it in reality. I also want to revisit this slide, because I do love having APMs, and looking at their dashboard gives me that type of joy we get from seeing the result of our work plotted out as a graph. I feel there should be a German word to describe that sort of enjoyment. But here's the thing: APMs aren't free. The more you instrument, the more runtime overhead you have. The more runtime overhead you have, the more hosts you need to handle the same amount of work. The more hosts you have, the more problems you have, and the more complex they become. I feel Schrödinger should deliver the next line: by observing, we effectively change the outcome. Some people use that as an excuse to avoid APMs, which I feel is like throwing the baby out with the bathwater. We need APMs. We can't manage at scale without them. But we need to tune them, and observing everything isn't an option.
Thankfully, pretty much every APM vendor knows that, and they all let us tune the ratio of observability to performance so we can get a good result. Unfortunately, that means we get less data. Couple that with the reduction of logs that we need to do for the same reason, and the problems we had debugging production just got a whole lot worse. Let's take the Batman metaphor all the way: we need a team-up. We need some help on the servers, especially in a clustered polyglot environment where the issue appears in one container, moves on to the next, etc. Remember this slide — we need some way to get through that line. Not to remove it; we like that line. We need a way to connect with the server and debug it. Now, I'm a developer, so I try to stay away from management buzzwords, but the word for this is shift left. It essentially means we're letting the developers and QA get back some of the access we used to have into the ops side, without demolishing the gains we made in security and stability. We love the ops people and we need them. So this is about helping them keep everything running smoothly in production without stepping on their toes or blowing up their deployment. This leads us here. What if you could connect your server to a debugger agent that would make sure you don't overload the server and don't make mistakes like sending a blocking breakpoint or something like that? That's what continuous observability does. Continuous observability is complementary to the APM; it works very differently. Before we go on, I'd like to explain what continuous observability is. Observability is defined as the ability to understand how your systems work on the inside without shipping new code. The "without shipping new code" portion is key. But what's continuous observability? With continuous observability we don't ship new code either, but we can ask questions about the code. Normal observability works by instrumenting everything and receiving the information. With continuous observability we flip that: we ask questions, and then instrumentation is made based on the questions. So how does that work in practice? Each tool in this field is different. I'll explain the Lightrun architecture, since that's what I'm familiar with, and I'll try to qualify when it's different from other tools. In Lightrun, we use a native IDE plugin for VS Code or JetBrains IDEs such as IntelliJ, PyCharm, WebStorm, etc. We can also use a command line tool. Other tools sometimes have a web interface or CLI only. This client lets us interact with the Lightrun management server. This is an important piece of the architecture that hides the actual production environment. Developers don't get access to the production area, which is still the purview of DevOps. We can insert an action, which can be a log, a snapshot, or a measurement metric. I'll show all of these soon enough — this talk will get to the code portions soon. Notice that the Lightrun server can be installed in the cloud as SaaS, or on-premise and managed by ops. The management server sends everything to the agent, which is installed on your production or staging server. This is pretty standard for continuous observability solutions — I don't know exactly how other solutions work, but I assume they are pretty similar. This means there's clear separation between the developer and production. As you can see, the DevOps still has that guarding line we were talking about. They need to connect the agent to the management server, and that's where their job ends.
Developers don't have direct access to production, only through the management server. That means no danger to the running production servers from a careless developer like myself. The agent is just a small runtime you can add to your production staging server. It's very low overhead and it implements the debugging logic. Finally, everything is piped through the server back to your IDE directly. So as a developer, you can keep working in the IDE without leaving your comfort zone. Okay, that should raise the vendor alert right here. I heard that bullshit line before, right? APMs have been around forever and have been optimized. How can a new tool claim to have lower overhead than an established and proven solution? As I said before, APMs look at everything. A continuous observability tool is surgical. That means that when an APM raises an interesting issue, we can look at a specific thing like a line of code. When a continuous observability solution isn't running, its overhead is almost nothing. It literally does nothing other than check whether we need it. It doesn't report anything and it's practically idle. When we do need it, we need more data than the APM does. But we get that from one specific area of the code. So there's an overhead, but because it impacts only one area of the code, it's tiny. It's very localized. This is the obvious question. What if I look at code that gets invoked a lot? As I said, continuous observability gets even more data than an APM does. This can bring down a system and well, we could end up here. So this is where continuous observability tools differ. Some tools provide the ability to throttle expensive actions and only show you some of the information. This isn't a big deal unless you have high volume requests. I think these things are best known shown in a demo because I can talk your ears off, but showing a quick demo can explain this faster. I'll use Kotlin for this demo, but this applies to other languages. I'll skip the setup portion since we don't have much time, but notice we have a free tier you can use freely from the website. This is the prime main app in Kotlin. It simply loops over numbers and checks if there are prime numbers. It sleeps for 10 milliseconds, so it won't completely demolish the CPU, but other than that, it's a pretty simple application. It just counts the number of primes it finds along the way and prints the results at the end. We use this code a lot when debugging since it's CPU intensive and yet very simple. In this case, we would like to observe the variable i, which is the variable we're evaluating here, and printout cnt, which represents the number of primes we found so far. The simplest tool we have is the ability to inject a log into the application. We can also inject a snapshot or metric. I'll discuss all of those soon enough. Selecting a log opens the UI to enter a new log. I can write more than just text. In the curly braces, I can include expressions. I want such as the value of the variables that are included in this expression. I can also invoke methods and do all sorts of things, but here's the thing. If I invoke a method that's too computationally intensive, or if I invoke a method that changes the application state, the log won't be added. I'll get an error. After clicking OK, we see the log appearing above the line in the IDE. Notice that this behavior is specific to IntelliJ or JetDrain's IDE's. In Visual Studio Code, it will show a marker on the side. Once the log is hit, we'll see logs appear in batches. 
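The demo class itself isn't reproduced in this transcript, but based on the description it is something along these lines — a minimal Kotlin sketch, with the names `isPrime`, `i` and `cnt` inferred from the narration:

```kotlin
// A small CPU-bound demo: count the primes below a limit, sleeping a little on
// each iteration so the loop doesn't completely peg the CPU.
fun isPrime(n: Int): Boolean {
    if (n < 2) return false
    var d = 2
    while (d * d <= n) {
        if (n % d == 0) return false
        d++
    }
    return true
}

fun main() {
    var cnt = 0
    for (i in 2 until 1_000_000) {   // 'i' is the value we'd reference in an injected log
        if (isPrime(i)) cnt++        // 'cnt' is the number of primes found so far
        Thread.sleep(10)             // keep the loop from hammering the CPU
    }
    println("Found $cnt primes")
}
```

An injected log would then reference those variables in curly braces, for example a message like "checking {i}, found {cnt} so far".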
Notice I chose to pipe the logs into the IDE for convenience, but there's a lot more I can do with them. For now, the thing I want to focus on is the last line. Notice that the log point is paused due to a high call rate. This means additional logs won't show for a short time, since logging exceeded the threshold of CPU usage. This can happen quickly or slowly depending on what you're observing. That's a relatively simple demo. To whet your appetite, let's pull out the big demo guns. I decided to take on Netflix and build my own movie service. Seems easy enough — turns out it isn't, and those people at Netflix sure know how to build applications. This is rough. I'm terrible at front-end, so I hired a developer to build it in React. He also built a Node.js server, which was pretty intuitive for a front-end person. This is actually a pretty sensible architecture. It lets the front-end developer handle all their stuff, including the start of the back-end area. Node.js scales reasonably well and can handle the stuff it's good at: communicating with the UI front-end, etc. Storage, transactions and reliability are a decade ahead in the world of Java, so using Spring Boot for the back-end heavy lifting also makes sense. I know it, and I've built a lot of stuff with it. This is a merge of the best each world has to offer. It also lets us hire the right people for the right job. The problem starts when we need to debug this, especially in production. Finding the source of bugs is a process of digging through logs with different conventions and sometimes even different terms for the same thing. And the problem is, we have bugs. So here is the app that will take over the world. As you can see, I have a list of movies and a UI that reminds us a bit of those big guys who, again, are doing a pretty spectacular job. I can add a movie to my list, I can visit my list and see the movies within it, and I can open a specific movie and see details about it. That's pretty much it. But here's the weird thing. When I click the No Time to Die movie, there is a problem: I don't get the right details. It's a weird bug, and I have no idea why it would happen. I think we can debug it using Lightrun. This is the Node.js project that implements the initial backend of our architecture, and this is the method that gets invoked when we click the movie and want to see the details. This time, I'll add a snapshot. Some other continuous observability tools call this a capture or a non-breaking breakpoint, which to me sounds weird, but the idea is usually the same. Once I press OK, the camera button appears on the left, indicating the location of the snapshot, like you would see with a regular IDE breakpoint. Now let's go back to the browser for a second, click the movie, and wait a few seconds for the problem to reproduce. Then we go back to the IDE, wait a moment, and the snapshot is hit. So what is a snapshot? It gives us a stack trace and variables just like the regular breakpoint we know and love, but it doesn't stop at that point, so your server won't be stuck waiting for a step-over. Obviously you can't step over the code — you go snapshot by snapshot — but this has huge benefits, especially in production scenarios. So what do we see here? We see the ID of the movie, and it seems like the right ID, so this isn't a case of a bad ID being passed from the UI.
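The Node.js method isn't spelled out in the talk; a purely hypothetical Express-style handler of the kind the snapshot was placed in might look like this — route, names and the Spring back-end URL are all made up for illustration:

```js
// Hypothetical Express route: the front-end calls this when a movie is clicked,
// and the Node layer forwards the lookup to the Spring Boot back-end.
const express = require('express');
const fetch = require('node-fetch');

const app = express();

app.get('/api/movies/:id', async (req, res) => {
  const movieId = req.params.id;   // a natural spot for a snapshot: is the ID correct here?
  try {
    const response = await fetch(`http://backend:8080/movies/${movieId}`);
    const details = await response.json();
    res.json(details);
  } catch (err) {
    res.status(500).json({ error: 'failed to load movie details' });
  }
});

app.listen(3001);
```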
We need to dig deeper into the Spring Boot server. In the Spring Boot server code, we've got the fetch movie details method, which does the actual fetching. I place a snapshot here, and I'll spare you the pain of going to the web every time. I have a curl command that performs the request, so I'm running it in a different window here, and as it's not really interesting. So now we need to wait for the snapshot to get hit. If this seems to take too long, spoiler, it never gets hit, and the reason for that is the cacheable annotation you see above the method. It seems the response from the method is cached. Unfortunately, we can't rely on guesses. We need to verify. So we need to go to the movie manager, which is our REST controller. It handles all the web requests directly, so here we can add another snapshot and again try to trigger the curl script with the hope that this time we'll get it right and the snapshot will get hit. After a bit of waiting, we finally hit the snapshot and in the get method. This means that our request is happening, but the failure we're seeing is because the cache got corrupted with bad data. Why would that happen, and can we continue our investigation? Well, yes we can. The place where the cache might be corrupted is here. In check for file updates, this method runs periodically every 60 seconds and checks the file system for an import file. This is how we add new movies into the system. We place a file within with data and it imports it into the database. It might break existing movies. Let's create a snapshot here and while we wait for a minute for the snapshot to hit, we can use that time productively to delete the no longer used snapshots from before. They will eventually expire, but they're left here in case you need reference. One of the nice things about debugging with snapshots is that normally you don't have to wait. You can just place them over the code and they get hit at some point. Great, the snapshot got hit just in time. So that means this code is working. Now all I need to do is evict the cache with a dummy file and we're done. Or are we? Notice the methods here, the batch update evicts the cache. So it would seem if you're new to Spring, if we'll do that and check for the bug again, we'll see the cache wasn't evicted. The reason for this is that Spring only reads annotations when they're invoked from a different class by default. This is one of those gotchas that often fail developers new to Spring. I can't believe I fell for that, huh? This was relatively simple in terms of observability. Let's up the ante a bit and talk about user-specific problems. So here I have a problem with the trending request. One specific user is complaining that the trending list on his machine doesn't match the trending list for his peers. The problem is that if I put a snapshot, I'll get a lot of noise because there are many users reloading all the time. So the solution is to use conditional snapshots like you can with a regular debugger. Notice that you can define a condition for a log and for metrics as well. This one, this is one of the key features of continuous observability. I add a new snapshot and in it I have an option to define a lot, quite a lot of things. I won't even discuss the advanced version of this dialogue in this session. This is a really trivial condition. We already have a simple security utility class that I can use to query the current user ID. 
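The Spring code isn't shown in the transcript, but the caching behaviour described here is standard Spring, so a minimal sketch may help — class and method names are illustrative, and a real app would also need `@EnableCaching` and the Spring dependencies:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class MovieService {

    // Stand-in for the real database.
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // After the first call for a given id, Spring returns the cached value and skips
    // the method body entirely - which is why a snapshot inside it never fires.
    @Cacheable("movies")
    public String fetchMovieDetails(String movieId) {
        return store.getOrDefault(movieId, "unknown movie");
    }

    // Evicts the (possibly corrupted) cached entries after an import.
    @CacheEvict(value = "movies", allEntries = true)
    public void batchUpdate(Map<String, String> imported) {
        store.putAll(imported);
    }

    // The gotcha mentioned above: calling batchUpdate() from inside the same class
    // bypasses the Spring proxy, so @CacheEvict is NOT applied. By default the
    // annotations only take effect when the call comes in from another bean.
    public void checkForFileUpdates(Map<String, String> imported) {
        batchUpdate(imported);   // cache is not evicted here!
    }
}
```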
So I just make use of that security utility class and compare the response to the ID of the user that's experiencing the problem. Notice I use the fully qualified name of the class. I could have just written Security and it's very possible it would have worked, but it isn't guaranteed. Names can clash, and the agent side isn't aware of the things I have in the IDE. So it's often good practice to be more specific with the fully qualified class names. After pressing OK, we see a special version of the snapshot icon with a question mark on it. This indicates that this action has a condition on it. Now it's just a waiting game for the user to hit that snapshot. This is the point where normally you can go make yourself a cup of coffee, or even just go home and check the next day. That's the beauty of this sort of instrumentation. In this case, I won't make you wait long: the snapshot gets hit by the right user despite other users coming in — this specific request is from the right user ID. We can now review the stack information and fix the user-specific bug. The next thing I want to talk about is metrics. APMs give us large scale performance information, but they don't tell us fine grained details. Here we can count the number of times a line of code was reached using a counter. We can even use a condition to qualify that, so we can do something like count the number of times a specific user reaches that line of code. We also have a method duration, which tells us how long a method took to execute. We can even measure the time it takes to perform a code block using TicToc. That lets us narrow down the performance impact of a larger method to a specific problematic segment. In this case, I'll just use the method duration. Measurements typically have a name under which we can pipe them or log them, so I'll just give this method duration a clear name. In this case I'm just printing it out to the console, but all of these measurements can be piped to StatsD and Prometheus. I'm pretty awful at DevOps, so I really don't want to demo that here, but it does work if you know how to use these tools. As you can see, the duration information is now piped into the logs and provides us some information on the current performance of the method. If I had more time, I'd talk about scaling this further with tools like Airflow and Python, or Spark and Scala, etc. These are amazing cases where continuous observability really shines, but we need to wrap things up. So in closing, I'd like to go over what we discussed here and a few things that we didn't have time for. Lightrun supports JVM languages like Java, Kotlin, Scala, etc. It supports Node for both JavaScript and TypeScript, and Python — even complex stuff like Airflow. We're working hard on adding new platforms, and doing it really fast. When we add actions, conditions run within a sandbox, so they don't take up CPU or crash the system. This all happens without networking, so something like a networking hiccup won't crash the server. Security is especially crucial with solutions like this one.
One of the core concepts is that the server queries the information, not the other way around as you would see with solutions such as JDWP, etc. This means operations are atomic and the server can be hidden behind firewalls, even from the rest of the organization. PII redaction lets us define conditions that obscure patterns in the logs. So if a user would print out a credit card number by mistake, you can define a rule that would block that. This way the bad data won't make it into your logs and won't expose you to liability. Blocklisting lets us block specific classes, methods, or files. This means you can block developers in your organization from debugging specific files, so a developer won't be able to put a snapshot or a log in a place where a password might be available, to steal user credentials or things like that. This is hugely important in large organizations. Besides the sandbox, I'd also like to mention that Lightrun is very efficient: in our benchmarks it has almost no runtime impact when it isn't used, and a very small impact even with multiple actions in place. Finally, Lightrun can be used from the cloud or with an on-premise install. It works with any deployment you might have — cloud-based or container-based, on-premise, microservices, serverless, etc. Thanks for bearing with me. I hope you enjoyed the presentation. Please feel free to ask any questions and also feel free to write to me. Also, please check out talktotheduck.dev, where I talk about debugging in depth, and check out lightrun.com, which I think you'll like a lot. If you have any questions, my email is listed here and I'll be happy to help. Thank you.
It's 2022 and we still use logs to debug production issues? All the unit tests in the world, the largest QA team still can’t stop bugs from slithering into production. With a distributed microservice architecture debugging becomes much harder. Especially across language & machine boundaries. APMs/Logs have limits. There’s a new generation of tools in town… Production bugs are the WORST bugs. They got through unit tests, integration tests, QA and staging… They are the spores of software engineering. Yet the only tools most of us use to attack that vermin is quaint little log files and APMs. We cross our fingers and put on the Sherlock Holmes hat hoping that maybe that bug has somehow made it into the log… When it isn’t there our only remedy is guesswork of more logging (which bogs performance for everyone and makes the logs damn near unreadable). But we have no choice other than crossing our fingers and going through CI/CD again. This is 2021. There are better ways. With modern debugging tools we can follow a specific process as it goes through several different microservices and “step into” as if we were using a local debugger without interrupting the server flow. Magic is possible.
10.5446/56976 (DOI)
I welcome Thanos. I'm not even going to attempt to pronounce his surname — I'm sure you can do that for me. Thank you very much. Hello everyone. I'm Thanos, my surname is Stratikopoulos, and I'm from Manchester. Today I have the opportunity to present our open source framework, TornadoVM, which is a programming framework that allows programmers to accelerate their Java applications on heterogeneous devices like GPUs, multicore CPUs, and FPGAs. This is the agenda for today's talk. I will start with a little bit of the motivation for our project, then introduce the insights of TornadoVM, then highlight a key feature of our system, which is dynamic application reconfiguration. Then some key cases of how TornadoVM has been used by applications to extract performance, and finally the current state and future directions. Let's start with motivation. Why should we care about GPUs and FPGAs? The answer to this question is: because they are available. Even small systems like our smartphones have multicore CPUs with GPUs — why not utilize them? Why not exploit all the available hardware that we have in our systems? In data centers we have seen FPGAs being deployed recently, and in the cloud they have started being available in AWS instances. Starting from the CPU, on the left side we can see an Ice Lake microarchitecture with eight cores and an integrated GPU. This microarchitecture can achieve, including the GPU, up to one teraflop of performance, which is good, but it is intended for control-flow execution — so for branches — and for low latency requirements. If applications have a lot of data that can be processed in parallel, they could utilize a GPU, which has high throughput to memory and up to around 3,000 cores available to process data. And lately there is the FPGA type of chip. The nice thing about this chip is that it is programmable, so the same device can be reconfigured and tailored to the needs of the developer. It is intended for pipeline parallelism and low latency, but it comes at the cost of programmability, because FPGAs are traditionally programmed with a hardware description language. So despite all this diversity in the hardware that appears in the right part of the slide, the question is how a programmer can harness this, especially from high level languages like C, C++, even Java — and the answer is by using a programming model, because that is where the whole magic is. The abstraction comes from the programming model. In this case, there are programming models for heterogeneous systems like OpenCL and CUDA, and these programming models abstract the execution. They define an execution model in which the CPU, the GPU and the FPGA — the accelerators — can be used in an abstract form. The execution is the following: first you copy the data into the memory of your device from the main memory of the system, then you execute — you accelerate the processing of the data — and then you copy the results back out to main memory. In this way the CPU, the GPU and the FPGA look alike: they are just accelerators that process data. So then the question is: okay, we have C, C++, OpenCL and CUDA that can target all the devices available on a system, but what about managed languages? What about Java, JavaScript, Python? What about languages that have been designed by nature to write once and run everywhere?
Well, the current frameworks, the current JVMs, emit code for processors, mostly for the x86 architecture. Therefore, there is currently no framework that allows Java to transparently generate dynamic code for any hardware device, like an FPGA or an integrated GPU, transparently to the user. And this is the main motivation of our work: we envision a system that will allow these languages to transparently exploit all the available hardware on the platform. Let's go and have a look at the insights of TornadoVM. In this slide I will present the software stack in a top-down order. Let's start with the API. Tornado currently doesn't detect parallelism — it doesn't know which parts can be parallelized — so it relies on the programmer to specify that a method could be a good candidate for acceleration on a GPU. This is done by exposing our API. Basically, our API is a task-based API, so we have tasks. A task is a representation of a method that could be offloaded to the FPGA or the GPU, and we can have a group of tasks — a group of methods — that can be offloaded and executed on the hardware in a sequence. The tasks are then forwarded to the runtime, in which we have an optimizer that can optimize the execution. For example, if we have two tasks and the second task consumes data that comes from the first task, then this data does not need to be copied out of the GPU. In this case we can optimize the data transfers and save energy. Then the runtime emits our bytecodes — Tornado bytecodes, simple bytecodes that orchestrate the execution. The bytecodes are initially executed by an interpreter and then forwarded for lazy compilation to the JIT compiler, which is the Graal compiler, extended to apply specializations for the devices — for execution on the FPGA or the GPU. We have different types of specialization according to the device that we target, and this is essential because, although OpenCL can be portable across any device, the performance is not portable at all, especially when you want to run code that was meant for GPUs on hardware like an FPGA, which is ideal for pipelined execution. The compiler then emits the specialized code, and the code is forwarded to the device drivers, where the vendor compiler — for example the NVIDIA compiler, or for the FPGAs a high level synthesis compiler — compiles the OpenCL into the final binary that will be offloaded and executed on the device. Our system is modular, and we currently support NVIDIA and AMD GPUs, multicore x86 CPUs from Intel and AMD, and Intel and Xilinx FPGAs. This is now an example of how the user can use TornadoVM — how they can specify that this code could be parallelized on the hardware, on a GPU for example. This is a class Compute that has one method, mxm, which computes the matrix multiplication of two arrays A and B, with the result stored into the C array. The only way the programmer can parallelize the code with TornadoVM is by using the @Parallel annotation, which is an annotation exposed to the programmer in order to indicate that these loops could be parallelized. So this is a hint, and that's all that is done as modification of the method. With this, Tornado is able to parallelize the loops, apply specializations for the hardware devices, then execute and get performance for free.
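Pieced together from the description here and the task-schedule explanation that follows, a minimal sketch of the example might look roughly like this. It is illustrative only: the class, task and variable names mirror the slides as narrated, and the exact API package paths and method names differ between TornadoVM releases.

```java
// Illustrative sketch - not the speaker's exact code.
import uk.ac.manchester.tornado.api.TaskSchedule;
import uk.ac.manchester.tornado.api.annotations.Parallel;

public class Compute {

    // Matrix multiplication C = A x B over flat row-major arrays.
    // @Parallel is only a hint: it marks loops that TornadoVM may parallelize.
    public static void mxm(float[] a, float[] b, float[] c, int size) {
        for (@Parallel int i = 0; i < size; i++) {
            for (@Parallel int j = 0; j < size; j++) {
                float sum = 0.0f;
                for (int k = 0; k < size; k++) {
                    sum += a[i * size + k] * b[k * size + j];
                }
                c[i * size + j] = sum;
            }
        }
    }

    public static void main(String[] args) {
        int size = 512;
        float[] a = new float[size * size];
        float[] b = new float[size * size];
        float[] c = new float[size * size];

        new TaskSchedule("s0")                        // group of tasks "s0"
            .task("t0", Compute::mxm, a, b, c, size)  // one task wrapping the method
            .streamOut(c)                             // copy the result array back to the host
            .execute();                               // compile for the device and run there
    }
}
```

The program is then launched with the `tornado` wrapper instead of plain `java`, for example `tornado Compute`.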
The only other change that the programmer needs to make is to be compliant with the API that we expose. They need to create a task schedule, which is a group of tasks. In this particular case it is S0, and this task schedule will have one task in this example, because we have one method: T0, the name of the method, and then the parameters of the method. In our interface for the task schedule we have streamOut, in which the user can specify the variable that will hold the result of the actual computation, and then we execute. Once it is compiled, the programmer can execute the code on the GPU by just running tornado and then the class name. tornado is an alias for java plus all the JVM parameters that it uses. Let's have a look now at what dynamic application reconfiguration is, which I think is a very nice feature to have. Dynamic application reconfiguration is essentially live task migration: the tasks, the methods, can be dynamically migrated from one device to the other. This is really cool. Let's look at how our framework is built in order to support this functionality. At the top we have the task schedules, which are the groups of methods to be offloaded onto the hardware, to be accelerated. Then Tornado forks one thread per device — for example for the multicore CPU, the integrated GPU, the external GPU and the FPGA — including a thread for HotSpot, which will compile the code in OpenJDK. Each thread compiles the code if it is not already compiled, and when it is compiled the code is stored in the code cache so it can be reused the next time. Then the code is executed — offloaded onto the hardware — and we wait to see when it will finish, so we have a barrier at the end at which all the threads are joined. After doing that, we are able to apply what we call policies. With these policies we are able to decide what we want to do. Do we want, for example, the first thread that compiles and executes to be the only thread that executes, and then kill all the rest? This is the latency policy, which is intended for applications that are very latency-critical. Another one is end-to-end, which includes the time for compilation and execution. And the other one is peak performance, which is the policy that measures only the data transfers in and out and the execution. Let's go and see some performance results for this feature of dynamic reconfiguration. In this figure we present four systems: TornadoVM is the one that decides where the execution should be migrated for performance, which is the dynamic reconfiguration; then we have the CPU, the GPU, and the FPGA. In this figure we have two benchmarks, two applications: one is DFT, and the second one is N-Body. We compare two different policies: end-to-end, which has the JIT compilation included in the time, and peak performance, which is the execution and data transfers. The interesting part of these results — well, let me first explain the axes. The X axis has the data sizes, and the Y axis is the performance against HotSpot. We see that for small data sizes the peak performance can be achieved in HotSpot, so it doesn't make sense to copy the data over PCI Express to the device and execute a small computation, because the data are not significantly large.
As the data sizes increase, we can see that the execution on the GPU or the FPGA can be significantly faster than HotSpot, and then it makes sense to migrate the execution there. Another interesting fact, for example in the peak performance plot, is that if the GPU were not present — if the pink points were not there — then for large data sizes the execution would be migrated to the FPGA. And this could be significant, because it could give energy savings, significant energy efficiency, to the system. The maximum performance that we got is up to 4,500x against the sequential Java code, and that was on an NVIDIA 1060. Let's have a look at how Tornado has been applied in real applications. First I have to say that Tornado is maintained under the umbrella of a European Horizon 2020 project, E2Data, which has as its objective to create an end-to-end solution for big data frameworks that want to target heterogeneous computing nodes. The first example is accelerating Apache Flink, which is a big data framework. In this case the clients are Java developers who create operators in Java. They forward the operators to the job manager and eventually the task manager, which will distribute the operators across the available computing nodes in the system — the distributed heterogeneous nodes. Each node can have a GPU and can be configured with different hardware capabilities. That is the goal of this use case. The second case is machine learning acceleration. This use case has been developed by Exus, which is the coordinator of the E2Data project. The main problem here is that patients go to the hospital, they are hospitalized, they are admitted there, and then they leave the hospital — but there is a chance that they may be readmitted, depending on their profile, the disease, the conditions, and other features. The idea here is to create a machine learning model which will accurately predict how likely it is for a patient to return to the hospital. Exus saw that by deploying TornadoVM they can achieve up to 14 times higher performance for a dataset containing data for 2 million patients. The last case that I want to present is deep learning acceleration. In this case we took Deep Netts, which is a deep learning framework written in Java. Deep Netts currently doesn't have support for GPU acceleration, and we know that deep learning has the potential to be parallelized, because the networks have many neurons, so they can be processed in parallel. I would like to emphasize here that the currently available solutions for deep learning use pre-compiled kernels — static binaries that you can deploy, also from TensorFlow — with bindings for Java and Python. So there is no current framework that can dynamically generate code for the devices; they are stuck with static compilers. On the right side I have an example of how we accelerated a part of Deep Netts, the backward propagation method. This is the original code of Deep Netts, and these are the changes that we did: we added the @Parallel annotation, and then we created a task schedule for this particular method — one task — and we specified the input and output of the method to go to the hardware. With that we achieved up to eight times higher performance for large datasets. Let's have a look at the current state of the project and the future directions that we have. Tornado is currently available on GitHub.
It is open source, so feel free to go to try our examples, to go through the documentation. We have also docker images available for NVIDIA GPUs and integrated GPUs. I would like to emphasize also that we have tested with any IDE, so you can debug the code for instead of going through the vendor tools, you can use the IDE in order to debug your code in Java instead of using the hardware debuggers and all this painful procedure to develop for FPGAs, for example. So what's next? In the current work, in our work in progress, we are becoming compatible with OpenGDK 11. We are doing optimizations for FPGA and GPU execution. We currently run on AWS instances that they have CPUs, GPUs, and FPGAs. And we're working also on NVIDIA PTX on a CUDA backend. This is our team composed of academics, staff, and PhD students. And of course we are looking for collaborations, so feel free to give us feedback, to talk to us. I'm here with Florin, my colleague, so we'd be glad to have a discussion about our project and feedback. Here are takeaways. I would like just to emphasize that our work is not meant to replace hotspot. We just want to emphasize that the hardware capabilities exist on the hardware, so we may want to leverage for large datasets, so it may work to offload a part of our program on the FPGA, another part on the GPU, etc. So thanks for your attention. We would be glad to discuss about our project and get some ideas and feedback. So thank you very much. Any questions? We've got two minutes. Sorry. Sorry. So you basically schedule the algorithm to one of the hardware stacks that you have, right? Yes. We don't only schedule, we create the code. You create the code? Yes. So my question would be what kind of workloads have you tested this on? Because suppose there's multiple algorithms in parallel, which one would you optimize? Or which one would you run? How would you solve such a problem? It depends on the characteristics of the application, so it's not a fixed solution. So for example, it's not a specific answer that this will go there. So GPUs are not intended for pipeline executions. Where FPGA can give you more performance improvements there. So I think it's a tradeoff. Depends on the characteristics of the applications. So at first we profile the code and then we analyze it. Thank you. One more question. Very quick one. You said you were using the Graal compiler to JIT compile. Were you using Truffle to feed your bytecode into that or were you feeding it directly? Good question. So far we don't do that, but it's in the future work in our plans to do that in order to become compatible with any Truffle language. Thank you very much. Thank you.
Hardware acceleration has become prevalent in most application domains as a means to increase performance, while also achieving high energy efficiency. However, the programming models for heterogeneous hardware accelerators inherently support C/C++, thereby hindering the exploitation of heterogeneous resources from managed languages, such as Java. In the University of Manchester, we have been developing TornadoVM; an open-source software technology that can be used as a plugin to OpenJDK and other JVM distributions to enable hardware acceleration in a programmer friendly manner. In this talk, we will present a practical view of TornadoVM and focus on two parts: (i) analyze what can impact the performance of applications on heterogeneous co-processors, and (ii) how Java developers can utilize TornadoVM to increase the performance of their applications.
10.5446/13816 (DOI)
Can you see my screen also? Yes, I can. Can you see my screen also? I think we should start, right? Can you hear me? Should we start at 6:30? I don't know how many participants should be here. Me neither. Let's wait one or two minutes. A quick introduction about myself: in 2019 I contributed to gatsby-source-plone as a GSoC student, and after that I joined the kitconcept company. As for my JavaScript experience, I have been writing React for the past two years, so I think I am fairly aware of React and its new improvements and features. And from my point of view as the trainer, I can say that after this training you will have a fair amount of React knowledge, so that you can start building your own React app — I think it will help. Hi Jacob, can you hear me? Hey, yes, can you hear me? Yeah, I can. Awesome. Can you maybe make me co-host so I can do a bit of management of the call? If that doesn't work on Zoom somewhere, then never mind. I am in — oh okay, I see. Okay, can you give me two minutes to also introduce myself? Yeah, sure, thanks. So yeah, I'm Jacob, I'm also working for kitconcept, and I will be assisting Alok in his training today, and after that giving my own training on Volto for you guys tomorrow. First of all, thank you — my task for today is to keep track of the chat and the questions and such, to lift a bit of work off Alok. So yeah, that's how we'll do it. Then take it away, Alok. Yeah — so, Jacob, should we go through individual introductions or should we start? You can go ahead, go ahead with your training. Yeah, maybe that makes much more sense. So what will we do in the training? In the training we are going to look into the React library, and Create React App as the boilerplate for starting the project, and a package manager for managing our dependencies — it is called yarn, but I am going to use npm, because on the Apple M1 I have some problem regarding Node. If you have yarn, you can use it. We are also going to look at Redux, because Redux is the state management library for React — if you have a bigger app, you also want to maintain some global state in your project, and for that we will use Redux. For navigating around your app we will be using React Router; it's a package which provides you certain components so that you can navigate your project without reloading it. That's why we have a single page application: once the app is loaded, we are not going to reload the page at any point. As I already said, here is what to expect: at the end of the course you'll know how to write a basic application using React and Redux. You can also go to this GitHub repo — in this repo you will find all the training chapters, one in each branch. Clone the repo, and you can go to the individual branch where you get the code related to that chapter. It's written in such a way that if you complete the third and then you check out the
fourth, all the code from the third branch is already present in the fourth, so you do not have to go back and forth or anything. So if somehow you do not manage to keep up with us, we can do that, but I assure you I am also going to write the code which you are writing during the training, and I will wait for you to finish, and I will help you with all the errors we get. I will be live coding with you so that we have an interactive session. For the future, if you get stuck while going through the training on your own, you can just use the branches — they are just for reference, nothing else. So, on to the training. At the 2020 Plone Conference I also gave a training on basically the same thing, React, and here is the recorded video. I think that after this training the video will also be uploaded to YouTube, so that in the future you can go through the training at your own pace and you do not have to keep pace with us — we have to cover a lot of material in just four hours, so we'll go fast, and maybe in the future you can go through it by yourself. After the training I will update the video link here once the video is uploaded, so you do not have to worry about anything; you'll have the same training which I am giving today. Let's start with the main fundamental starting point of the training: bootstrapping a React project. First, installing the dependencies — what is needed before we bootstrap a React project. We will be using nvm for managing the Node version. Why is it important? Because you may have different projects locally, and all of them depend on different Node versions — Node 10, 12, 15, 16 or the latest one. Since you want to change the Node version for a particular project or folder, just use nvm: nvm makes it very simple to install a specific Node version locally so you can get started with that project. You do not have to go and re-download Node and the Node binaries whenever you change your project. I think you already have the latest Node installed locally; if not, please install it, because we are going to use it. And we also have a package manager, yarn, for installing the dependencies — as you know, React is a dependency, Redux is a dependency, React Router is a dependency, and for fetching the libraries from npm we will be using this package manager. So first, let's get started by running this command: npx create-react-app and your project name. I already did it previously because it takes some time to install, but I will give you about a minute to run this command in your terminal — npx create-react-app and then your project name, whatever you like; I think I used something like my-volto-app, or you can type my-volto-first-app or my-volto-second-app or anything. Then — I think I see three "done" messages in the chat, so I think everybody should be able to do that. Okay, so after that, go into the app folder and open it in the code editor, whichever code editor you use. Can you see my screen?
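For reference, the commands in this step look roughly like the following — the project name is whatever you chose, and with yarn you would use the equivalent yarn commands instead:

```sh
# Pick a Node version with nvm (any recent LTS works for Create React App)
nvm install --lts
nvm use --lts

# Bootstrap the project (this downloads a lot, so it takes a while)
npx create-react-app my-volto-app

# Go into the new folder and start the dev server on http://localhost:3000
cd my-volto-app
npm start
```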
Yes, we do. Okay, so once you log in, once you went to your code editor, you'll see this directory like your create react have already created some files and folder for you. And once it's folder is the node models, we are all the dependencies is strong. You can see like we have all lots of dependencies which already came with the create react app. Then we have public folder, which is used when you deploy your website into the server so that any people can access it. And then we have the source directory and the source directory has all the folder and all the files which we create and and we will work into this directory only. So the pre pre file which come from the create react app is first the app.css, which we use for styling our app app.js is the app.js X, like which is used for our interface. And we use for our entry point of for our the app like so this will be the with our first component which is going to render from the react. And then we have this index of JSX, where we are just importing the app component. I'm going to explain all of this in more detail. So let's see. So I will go into explain it but let's see like this is the folder structure which you are going to see like once you went into the running the project, like when you go to your app and you just this and you run this command. So, but this is the directory structure which will get when you first create the project using the create react app. After what happens when we run the project so I'm using npm you can use the yarn. So, this is the thing which we see once we run the yarn start. And this is the component which is getting rendered. So you see like we have this app header class name. And then we have the image with the logo. So maybe let me show you. This is the logo. So this is the logo which you are seeing. And then we have this edit line like it is app.js and save to reload and then we also have a link. Like when you click it goes to the react documentation. Are we good like till this point, I think so. Okay, so let's move to our first like what is a react component. Okay, so once we create our app using create react app the first component which is seal or the first file which we are seeing is this thing. This app.js and what is it. So in react, we write a function and function returns some HTML tag and this tag is get rendered into the DOM or into the browser and this is what we see. So first we are rendering this div and div contents a header. header has an image and image has a paragraph which I already saw you like seen you like this is the thing which is like currently rendering. And so this is the thing which is currently rendered into the browser. So first of all, like why we are writing this all think this HTML into the JS and why it's happening like this like we are writing JSX into this. JavaScript file and why it's that so reacts is a library, which give you some method. And using that method you can create your own user interface. And for writing your user interface we want a template engine for writing and understanding the user interface more simpler, because react provides you a method called react dot create element. Which does, which creates the element of your UI on to into the browser. So react comes with a JSX. JSX is stand for the JavaScript XML. It is a temp. It is an engine, which lets you write the HTML code or HTML tag into your JavaScript file. So you can see that like we are writing the HTML into the JavaScript file. 
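For reference, the two generated files being discussed look roughly like this — the exact contents vary a bit between Create React App versions, so treat this as a sketch:

```jsx
// src/index.js - the entry point: it renders the <App /> component into the
// <div id="root"> that lives in public/index.html.
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById('root')
);
```

```jsx
// src/App.js - the component you see in the browser: the header, the logo,
// the "Edit src/App.js and save to reload" line and a link to the React docs.
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
    </div>
  );
}

export default App;
```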
And if you coming from the background of the creating your web browser or website, you know that you will use a HTML file for creating the user interface. But here we are using where we are using HTML into the JavaScript. So you can see that hey, we are writing the HTML but it's in the JavaScript but let me remind you this is not the HTML. It is the JSX and JSX is an engine which convert this HTML tag into the react elements. So let me show you like how this HTML tag converted into the react create element. So for that react uses a transpiler known as Babel. So let's get know about what is Babel. So Babel is a transpiler that convert the JavaScript into the old ES5 JavaScript that can run in any browser. So what does Babel is that is like it takes your current features of ES6 or the current developing to the JavaScript environment and convert this JavaScript. JavaScript ES6 code into JavaScript 5 so that browser can run this code into any browser so that browser can run this code. So let me show you like what will happen if we convert this code into the what will be the code generator once we transpile using the Babel. So we go to the Babel and here if you type you can see that like like you have written some JSX code like the class name and it's gets created into the react create element. So maybe we are thinking that we are writing the HTML but no we are writing the JSX a special template engine which gets converted into react dot create element class. And this is the method which is used by the react for creating your HTML attribute to the DOM. So whatever HTML tag which you are writing is get first converted into the react dot create element reacts create this HTML tag and this tag gets rendered into the object browser. Does it make sense everybody. Yes, it does. Okay. Now, once we understand that like we can write the HTML into our JavaScript means into our JavaScript file. So this is our the first exercise we are going to write a question like what does the phone foundation do and we also write the answer this thing which you can copy paste the same thing. And the first exercise is to render this question and answer here into the browser. So this will be your first exercise and you can, what we can do is like just replace this code, remove all these things because we do not need. And just write your just write your HTML code which you learn or learn from your previous experience and just so this question and answer. Once you are done, please tell me like in the chat like here we are able to complete the 4.2 exercise or just write just write done 4.2 in the chat. Or if you are a stuck do not worry I will going to show you the solution I'm just giving you one to two minutes or three minutes, because it just like to write HTML tag. Put the question and answer. If you have any questions, please feel free to ask because we're not that many people in the training so we actually have to take care of in terms of questions. Jacob your connection is wrong. You can't hear it properly. You just have to write a HTML tag put the question there and like just the heading or something and put your answer into a paragraph or something like that. Just do that and level to see that in your browser this thing here. Is my connection now better. Yeah. Okay. Yeah, what what I said is that everyone should feel free to ask questions because we are not that many people in the training, which means that we can take care of individual people here. 
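For reference, this is roughly what the Babel step just described looks like in practice. The first snippet is a small piece of JSX like the one in App.js; the second is approximately what Babel produces from it with the classic JSX transform (newer setups use a helper from react/jsx-runtime instead, but the idea is the same):

// What you write (JSX):
const element = (
  <header className="App-header">
    <p>Edit src/App.js and save to reload.</p>
  </header>
);

// Roughly what Babel turns it into (classic JSX transform):
const element = React.createElement(
  'header',
  { className: 'App-header' },
  React.createElement('p', null, 'Edit src/App.js and save to reload.')
);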
Can anybody confirm whether they are done, or should I show the solution? Okay, so let me show you the solution. What I told you to do is: remove all of this boilerplate, which I already deleted, and write a ul, an unordered list. Then you put an li inside it. I am writing this as if it were HTML so you can see it, but remember it is something different, it is JSX. So I just use an h2 tag and I copy the question and put it there, then we use a p tag and put the answer into the p tag. Then we create another li. This is the same thing you might do if you were creating a plain website and had to show a question-and-answer list: you create another list item, an h2 for the question, and a paragraph for the answer. Then we save it, we go to Chrome, and here we are: you see, now I have a question and an answer, and another question and answer.

You can also see the solution by clicking here on the training page, but I would not recommend it; if you are still stuck, just copy it and paste it into your app, replacing the body of the function, and you can also remove the import we are no longer using. Are you able to do that? Can anybody in the chat confirm that they are done? Okay, Simon. Anybody else following? Okay. Stephen, you can ask your question by just unmuting yourself. Okay, so you're basically saying that... "Yes, I was just saying that I'm done, not that I have a question." Okay, so this is the first React component you made, it is live, and you can see it on your localhost:3000.

At the end, this is the same thing I explained earlier: create-react-app generates some boilerplate, and JSX is a special format where it looks like you are writing HTML, but before execution the source is first transformed into valid JavaScript. In the same way, the div and the other tags in this code are first transpiled into valid JavaScript using the function React.createElement, and create-react-app does that for you automatically because it already comes with all the build tooling, like webpack and Babel.

At the end of the file you will see this export statement. This is the ES6 module export. If you do not know how to export things from a file, you can go to the JavaScript modules documentation to see how it works, but mainly there are two types of export. The first one is the default export, like export default App. The other syntax, export { App }, is called a named export, because when you export a named function you can only import it by its name: once you export it that way, you can only import it with the same name, { App }. But if you export it using a default export, you can use any name on the import side, like import Banana from './App', in any file.
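For reference, the finished App.js from this exercise looks roughly like the sketch below. The question is the one from the training material as I heard it, and the answer string is only a placeholder; the comments at the bottom restate the default versus named export point:

// src/App.js, exercise 4.2: render the FAQ as plain JSX
function App() {
  return (
    <ul>
      <li>
        <h2>What does the Plone Foundation do?</h2>
        <p>The answer text from the training material goes here.</p>
      </li>
      {/* a second li with the same h2 + p structure goes here */}
    </ul>
  );
}

// Default export: the importer may choose any name, e.g.
//   import App from './App';    or    import Banana from './App';
export default App;

// The named alternative would be:   export { App };
// which must then be imported as:   import { App } from './App';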
So that's the only difference like if you are importing export default, you can use any name but if you are importing and a named export you have to write a curly braces and then that it makes sense. I think so. That's the only thing that you need to know other than that it will be fine like if you go through the documentation you learn lots of things. Right. So now now we are going to a style out component. Right. So how you can the first thing is like when you started like creating a website is like create your skeleton for the UI like which we created which we created here this thing and now you're the second thing will become to come to is a styling how we are going to style this component for a styling component we do not have to do anything right you just created dot dot your file dot CSS file and you'll write all your CS code there and that will be get applied to your HTML code. So it is the same thing like which you write into the HTML like define your class into the CSS file and use it into your app.js file. So let me show you like if you want to style this thing. So that's it that will be your second exercise like a style the component so that the dot on each item is removed and the question is underlying. So what you have to do is like you have to remove this style this thing the dot thing and put a underlying just below this s2 tag about just below the question just do that like what do you have to do just go to the app.js file remove all those things you do not need it. Just write a class name here a class name like here a class name like on the list item a class name on to the list item and write your class like the CSS declaration into this file and you are done that's the simple thing which you have to do. I'll give you like two or three minutes because it has some quirks at the end like please do not see the solution first try it yourself then I will show you like how to do that and then we can go through the solution and discuss everything. And if you have any personal free to ask like are you able to do that like remove that dot and put the underlying below the s2. Give it Marcus. Okay, so we already go I already gave you like two or three minutes so let me show you like how you can do it. So what I told you to do is like just write a class name. Okay, so if you are writing an HTML you are going to write this thing like you write this class right and after writing the class you provide your class name like whatever like here we will use the FQ item. Right, but since you know that this ally is not a pure HTML tag, this is going to be converted into a JavaScript object and JavaScript already has a class attribute like a class keyword. So this is the reserve keyword into the JavaScript you cannot use it in JavaScript anywhere. You can only use it for defining a new class into your JavaScript file. So instead of class the JSX provide you to write it class name. So you can declare a class using this keyword the class name into the JSX you cannot use the class otherwise it will give you an warning saying that class is a reserve or something like that. So what I told you to just write a class name and you can put whatever you want but from the training perspective view I'm going to use the FQ item. And for the S2. I'm going to use the class name. Question. Okay, so now we wired up our the class name in for the ally and for this to we have written the question now we are going to app.css. Now you have to write FAQ item. Let me copy it so that. Let me copy it so that. FAQ item. 
List. Style. Name. Save it. Save it. So we have this FAQ item. Okay, so we also have to import the. CSS file. Both. Also as. App.css. And we also have to put the class name. Here. And for S2 class name we have to put this here. So once you do that you can see that like we now have this the dot is gone because we remove the list of style thing. Now you have to know I have we have to put this underline below this S2 tag. So what are the class name the class name is the question. So we just go to the question. And what will we do like this. And what will we do like text decoration. Underline. You see. Now we remove the. Dot from our list item and we also put the underline below the S2 tag. This is the exercise like a style of component so that the dot on each item is removed and the question is underline. Everybody is done. I think so. Yeah so there is a question from the markers like did it did it but what's with class to yeah sometimes but okay so let me show you. So I put this class. Put this class. Let me show you like. Go to your react app. Go to the console. Did you see in the console you are it's working because reactive able to defer that like a reactive smart enough to figure out that like hey class is a invalid on property because class is a keyword into the JavaScript and you are passing the class in a for a dormitory class. So what do you it is telling you is that like telling is like it did you mean class name so what is does behind the scene is like when you pass a class it gets converted into the class name and it gets applied to your list item or the S2 item but it can but it will already be done. Give you an warning like hey this is a invalid on property because class is a key word into the JavaScript. Okay did you understand markers. Yes thank you. I just tried to. Yeah yeah it's what but the reactive smart enough to figure it out is like classes not a dorm property or an attribute. But the only and only and sometimes many many many many cases where the cycle is very. And all both voices. Your connection not. Okay so we get converted into the class name. Okay so now you understand this part right everybody I think so. Okay. Understand okay. So let's go move to our next chapter like convert to use welcome to me. Okay so. Okay. Okay. So let's go move to our next chapter like convert to use welcome to me. Okay so now when you came to hear about the react you already know that in react we create a component right so you already everybody's talking about a react use component we make component we make button component we make alert component and then we combine all of them to create a user interface. So into the similar way in this app we are going to create a component. Right which is going to receive our markup like since you can see that like we are using this repetitive things. I can show you into the previous into the create react component if no create a component into the exercise thing like let me show you like you can see is that like you are using markup twice like you are creating a ally. With the s2 and the p tag with the s2 and the p tag on the same in the same component so what we can do is like we can create a component which only contains this thing. Like it only has one job to do is like like only render the questions and answer of the particular list. 
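Before we move on to components, here is roughly where the files stand after the styling exercise, for reference (class names as used in the session, faq-item and question; remember that in JSX the attribute is className, not class):

/* src/App.css */
.faq-item {
  list-style: none;              /* removes the bullet in front of each item */
}
.question {
  text-decoration: underline;    /* underlines the question heading */
}

// The relevant part of src/App.js
import './App.css';

function App() {
  return (
    <ul>
      <li className="faq-item">
        <h2 className="question">What does the Plone Foundation do?</h2>
        <p>The answer text goes here.</p>
      </li>
    </ul>
  );
}

export default App;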
So this is the component nature into the react so react is that you can refactor your own markup into multiple components like into the multiple file or you can say it into the multiple component and you can use that component to do a particular thing. So here we want to create a component which only job is to do is to show the question and the answer. So how we can going to do that so go to the this thing so you can so this is the this is what this chapter is for. Like to reuse the markup the FAQ item we will split up the code and that's the thing which I'm talking about like we are going to split this ally and ally s2 and the p tag for using our markup because we are duplicating it for each question and the answer. The app component will contain just the data of the FAQ item. Like the for the question and answer we only stored the data into the app j6 and will render a newly created react component called FAQ item. The data is passed to the sub component using properties. I will go and explain what is the properties in the FAQ item will have to access to the properties as a prop that question and the app your support will change to this. Okay, so let me first like do not read this you can read it like on your own. When you are going to training by yourself so let me explain what we are trying to do. So you can see from the code is like we have created the FAQ item component right and we are only passing this thing the question the answer the question and the answer. So this question and the answer thing which you can see is called the property. It is same like the HTML data attribute like for the image tag you must have seen in image source like. Like if you saw the image tag like you can pass a source attribute I ordered tribute. That's the same thing which you can pass for your component. Right so component has their own properties you can write your component properties name whatever you want so you can write in the place of question you can write in the place of question. You can write whatever the variable you can think of here. And the second question will be like hello how we are going to access these properties into our FAQ item. So FAQ item function receive a argument called props and the props has all the attribute which you pass to your component. So here FAQ item pass to property. One is question and the second is the answer. Once you create the FAQ item function it receives an argument called props and this props has all your properties like the this props has a question properties and answer property. So you can get access to this property. Just using props that question or props that answer into the into the FAQ. Item. Does this make sense like which I explained you. Okay. Okay, so I think so. Right. So here comes your new exercise. So just copy this code like we are going to the let me show you like like you have to create just another exercise then I'm going to show you all at the once like when I will code it. So here you have the exercise create a FAQ item. Like just create a file into this thing components last FAQ item so let me just create it for you like just go to the source create here create a components. Okay, in the components create a FAQ item dot j sx. Okay, so just write a function called FAQ item right. Same as the whatever you see into the function app right. And just import this FAQ item like this way. Right in this way and will remove all this code and we only use this thing. 
So this is what you have to do, and you can also copy and paste it from the training page. Okay, so we created a FAQItem.jsx. You have to write a function in it which renders the question and the answer for one particular question-and-answer pair. And this is the App.jsx, which imports the component from FAQItem and passes it two properties: one is question, the second is answer. So write a function in the FAQItem.jsx file, call it FAQItem, and inside the function you will receive props; based on those props you just render the markup, and you will be able to see your question and your answer. Just try it.

Yeah, so Simon has a question: he just noticed that we do not need to import React, and do I know why? Yes, because the current React, which might be version 17 or newer, already comes with create-react-app together with a new Babel plugin, the new JSX transform, which is installed and configured for you. So whenever Babel transforms a .js or .jsx file, that file goes through the JSX transform plugin, and that is why we do not have to write import React anymore. Michael wants to see the FAQ item file again, so let me show it once more: just write a simple function with that name and reuse our previous markup, the h2 tag and the p tag. I will give you a few minutes, because I know there is one problem you are going to face. Everybody done? I at least saw one thumbs up. Tomas is done. Yeah. Let's give it another minute, then we'll continue. And if anyone objects, just scream or spam the chat. By the way, is my audio better now? Yes.
awesome okay so let me go and show you like how are going to write okay so what I have told you like just write a simple function like fx q item this thing and I told you to just return a simple markup with li and li has this s2 thing and we have the p thing right and this f2 items receives a props argument it is already it will be provided by the react like when you write it like whenever you call say create a component and when you call this component into the fx q item this all these properties are going to be present into this props argument which will be automatically passed by the react so this is the thing which I explained right okay so now what we are going to do like how we are going to access this question so we can write it like that like which I told you like props.question but there is a one trick like this props.question is a is a java script it's not a valid stml dom attribute so in jasex to access the java script you have to provide this curly braces once you declare a curly braces you are into the java script land once again so if I just put curly braces here in this field like with your seeing I can do any java script expression I want like I can call a function like we let's have a let's think we have a add function like cons add a b return a plus b you see we can it's you can call this add function here also so once we declare this curly braces we are once again into the java script wall and you can do all the java script expression here so what so maybe you can thinking of okay hello what is expression so expression is not expression is just the one thing like one particular value like which is returned from this add component okay like like in this java script wall you can call any expression which is presented to the java script world and then we have a question like what is expression expression will be any literals any variables any function call which result into a single value so here when you call this add function this add function returning you a single value so we can write this function but you can't use any async operation like cause you can't use a promise there right because this is not a java script expression are you able to understand it like what I'm trying to say about the curly braces just remember one thing like once you declare this thing the curly braces into the j6 we are in the java script and you can write any expression you want an expression contains variables literals operators we can also use and operator or operator whatever that result into a single value we can use that because this and gets evaluated as true false it does not get evaluated like in the promise or something like it goes to network and make a request and anything we do not do it here okay can you hear me yeah okay yeah in the app js there the the component is written differently with function app and then props and in the fact I am in the FAQ item you use the error function what's the difference in declaring the components this way and why do you use which one okay so as you know that our new year six features provides you this error function right it is not available into the previous version of yes yes our java speed so previously for defining a function you use this thing right you write a function keyword and function name you provide an argument and you do this thing right and when you write this as a props component like a arrow function it does not have its own this all belongs to this this thing right that's the only thing because the error function does not use 
the binding of the this thing and a lot I think that this keyword is reserved for classes and the arrow function is just more more easy way or quicker way to define a function doesn't it okay then then I'll shut up yeah so the main difference between is that when you write a class into a javascript behind the scene it's right a function maybe it's a long topic but maybe you maybe I can show you like when you do this class my component like which you do into the react in the sense sense react dot component this thing right you use this thing right like for defining a class like into the react so beside the like into the backend when this code transpiles it gets converted into this thing function my component and now we have this dot all the props or the method use three and whatever the method you write like on click it's get converted into the prototype like my component like let me type if I am able to type my component prototype dot let's not get too deep into this because stuff in front of us okay so let me let me just give you the simple simple definition what is the difference between the function the main function and this the first one is that like this function component like when you declare this thing this function it has this own scope and it has the this keyword binding like when you create a initiate with this thing like const my app equals to new app right right Thomas then what we'll get is get will be the this binding like whatever you define between the function it gets preserved and you can accept it using the this you also get the constructor for that function you also get the methods like you can write the method into the into this function like you can you can go and study like how you can write a method into the function it will be like simple like this dot on click right but you can you can't do that into this the arrow function just a simpler way of writing a function without you have to deep dive between anything like the this the prototype and everything does that make sense it is a broader topic like what is the difference between arrow function and that thing does this make sense yeah you can just go go to the link provided by the jack up and you can see the difference two more questions I think first is Michael asks how we created the ad function I think the ad function was only to as an example on that you can define any function and are not any function but any expression and call that inside that that's not necessary for our current task as far as I'm aware and second is Simon wants to know whether the arrow function have any advantage over the normal function definition in terms of performance along to you no anything about that I think that the arrow function might be a faster than the main thing the normal function like when you write because when you define this arrow function it does not have to look into the other space because it does not have the this and you can't define the prototype like you can't write a per cube item dot prototype if you familiar with the JavaScript prototype maybe you can look at like when you search like when you have any string and you does that like a string like a low dot to uppercase right and when you call it convert this to the uppercase but how this dot to uppercase is executed you do not define it anywhere into the into your file so where does it come from so this this come from the JavaScript wall like this is thing is an object into the JavaScript and this is thing was already a built in method called to uppercase 
and then this method is getting called when you run this JavaScript and that's how your this thing become an uppercase if you want to implement this type of thing like you can create an object and that object have some built in method who can't do you can't do that with the arrow function that's why I think that arrow function will be more for promise than the normal function but it does not make any sense to compare this because it is pretty much mature optimizes and you can do you can write anything okay I think that I I have given the answer for the Simon so let's proceed like what you are doing is like we have we are getting into a tickle item a component and we are passing to props a question properties and answer properties and in FQ item we are have the props now you have to just access this props so this is the curly braces for going into the JavaScript one so I will just write props dot question right for accessing the answer I will just write answer okay and at the end I what I have to do I have to use the export default effect you item tie so what we are what we have done like we created a function like a component function receives an argument from the react and all the property which is passed to this function we can access is using props that question for writing those property into the j6 word we need to get this curly braces to go into the JavaScript wall and finally at the end we have to export this function because that's the only way we can access it using the FAQ item once we get the FAQ item from the FAQ item dot j6 we render it into the this way like this thing we also have to remove this and we just have to be saved and let's see into our reactor so you see now we are entering it right and let me show you like what happened if you do not do this you can see like attempt import error does not contain said this for default export so you have to provide this so let me make sense anybody like how we do that and why we do that okay so let me tell you like why we do that like previously when you have this component we have this repetitive ally thing and repetitive s2 and the p thing and we factor this thing this marker we do not have to write it multiple times we just created a small component what it does it just render the question and the answer and we just import into the app and we just send the data it required and it works this is why reactive efficient and this is why reactive is popular into the modern era because you have very complex component you have any checkout button and when you click on the checkout it goes to the strike right multiple sidebar right we have multiple like we have an opening model like when you go to any modern website when you click on to any image just pop up how you can create that using a normal spml you can't you have lots of bug so what we'll do is like we just create a separate component for the for an image we can create a author author component for the for the button we can only create a button component like here and whenever you want a button just import your component and just write button like button and it will work right that's why the reactive efficient and the and it is becoming more and more popular because it making us easier to do the complex tax in a more manageable way like you can just take a one piece of the complex problem solve it and you can just render it on your screen. Now you understand like how why we extracted this logic and created a new component I think it does make sense right now. 
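Putting the whole refactor together, the two files look roughly like this (a sketch of the live-coded solution; the prop names question and answer are the ones used in the exercise):

// src/components/FAQItem.jsx: renders a single question/answer pair
const FAQItem = (props) => {
  return (
    <li className="faq-item">
      {/* curly braces switch from JSX back into JavaScript, so any
          expression such as props.question can be used here */}
      <h2 className="question">{props.question}</h2>
      <p>{props.answer}</p>
    </li>
  );
};

export default FAQItem;

// src/App.js: the data is passed down via props
import FAQItem from './components/FAQItem';
import './App.css';

function App() {
  return (
    <ul>
      <FAQItem
        question="What does the Plone Foundation do?"
        answer="The answer text from the training material goes here."
      />
    </ul>
  );
}

export default App;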
So, there will be another exercise just for you, like it's on the same exercise, like just copy all the CSS file, all the CSS which you are written into the app.css create a FAQ item.css create a let me say like create a FAQ item.css file into your component file, like go to component FAQ item and FAQ item.css and put all your CSS into that file so that we can once again remove this dot and enable this underlying between the s2 tab just do it. This way we understand like how styling a component will work like when you create a multiple component just write the CSS file for each component and you just call it into your app like into your component and it will work. So, you just do it like it does not take long. Okay, so Simon have one question like if we separate our CSS into multiple files. So, do we have to watch out for name classes, yes, you have to watch out for the name classes. And that's why we, you can see that there is multiple discussion is going into our community like where to use CSS in jazz, where to use the style like style CSS, or where we just have to write this CSS or dot less file, because it already provide you mechanism to generate each your class name uniquely, or you just write to your own dot CSS structure like which you are doing from previously, like you just create the style and keep continuing with your writing with your class name. So, yeah, you have to be be careful when you declare the class name. Yeah, when you, when you are using multiple CSS files, there might be situation where there's an absolutely valid solution. I think when there are name classes in the end, the rules for the last imported files will be applied in the end, but you need to test that for each case. Yeah, but for in the model solution like when you are creating the your whole app like for creating a small app you do not have to think about that because you already know like what CSS class you have declared, but if you are creating a bigger app like or something like that, you mostly going to use some library like whether it will be the SS CSS dot less or a style component or CSS in jazz. So you do not have to do anything. Yes, we are you are going to create a new CSS file FAQ item dot CSS. And there you are going to write all the things like in go to your component folder, create a new FAQ item dot CSS and put all all your CSS from the app dot CSS into this file, and it will work. Okay, yes, we are going to remove that CSS file. Yes, we are going to remove it because we do not need it. Okay, so let me show you like what I am telling you to do. Right, so we have this component FAQ items will just go and create a FAQ item dot CSS file. Okay, we're going to app dot CSS, we are just going to copy FAQ item dot CSS. I haven't done anything, but if you go to react app, who it's not working, it's not working because we created the FAQ item, but in FAQ item we are not importing the CSS file so what we'll do is like, we just import it. Import dot FAQ. FAQ item dot CSS. I missed the slash. Okay, I also missed the class name here class name should be FAQ item. And class name should be question. And once I save it, you can see it like the dot is gone. And now we have the underlying below is to everybody able to do that. Like, everybody able to be at this situation now. Yes. We haven't done anything we just added the class name the same which is previously like a big item for a styling we need to provide a class name. 
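In file form, the result of this exercise is roughly the following: the declarations move out of App.css into a stylesheet that lives next to the component, and the component imports it itself:

/* src/components/FAQItem.css */
.faq-item {
  list-style: none;
}
.question {
  text-decoration: underline;
}

// src/components/FAQItem.jsx: the component now brings its own styles
import './FAQItem.css';

const FAQItem = (props) => (
  <li className="faq-item">
    <h2 className="question">{props.question}</h2>
    <p>{props.answer}</p>
  </li>
);

export default FAQItem;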
So we do that we did that and we also created a FAQ item dot CSS file, where we declare the class declaration for particular name, and we save it we imported. That's where it gets. Okay. So now we move to the 6.3 like property validation. Okay, so you know that like what is property like whenever you create a component. This question and the answer is called the properties for that particular component. So react has a built in mechanism to validate these properties being passed into the component. When incorrect values are passed will receive a warning in the console. In the ever example you have to add an extra import. So what is this thing this property validation and why you use that. Okay. So now you are have this question and the answer right you are passing this property and this FAQ items are receiving this properties the question and the answer. What does this component is doing this component only has to render the question and the answer. What happens if we pass an author name like we passed an object like a question object. This is like let's sing a look this is my name. Right. And when you say it right. So, Okay, so here what I am doing is that like this FAQ item have this properties like the question and the answer right and what happened if you provide the properties of the property. Right the properties of the FAQ item question to be an object instead of a string. What will have to happen let's see, I can go to the react app and you'll see an error. Right. Okay, so why it is that like since when we are creating a large app, maybe this properties will be coming from the backend, right from the backend or anywhere from where we are fitting the data whether with the serverless cloud or anyway. Okay, so what we want to do is like, we want to check the properties of this component, like we want this properties to be a particular value, like this question property will be always a string. This answer property should be always be a string it should not be an array, it should not be a null, it should not be particular thing which you do not want to do that, right, which you do not want to receive that. For that for checking this behavior like whatever the FAQ item, getting the properties from its parent whether it's the same thing or not you have to write a validation argument and in react we use this validation using a prop types dependency. So this prop type dependency provide you a certain method which is used to check whether the component is getting the right properties or the not. So, we are going to write this validation so if we are going to write a validation like if our component is getting a wrong properties, which you do not want will see an error into the console. So what we have to do, like, we have to import a dependency dependency is prop types. Okay, so this prop ties gives you an, and gives you a, but a object called prop types and this prop ties has a certain method with checks particular value whether this particular value has the same. Same type or not. So for checking the proper ties, you have to declare a static function. So this is the FAQ item dot. Prop types. This thing, which you are seeing this, I have to write it in the correct way. types. Okay, so you can see that this, this is the FAQ item right this is the function. And whatever this is the function and this thing dot prop type you see thing this is called the static static method for that particular static method for that particular function. 
So this thing will be get run by the react internally when when this component is getting rendered. Let me show you like how it's work. So once your app uploaded into the browser, this FAQ item component gets called. Once it's called the first thing will be happen is that is this static method gets called. And this method will be checking all your types. So where we what we want to type what we want to check, we want to check that the question is. This thing. And is required, like, whenever you want to render a FAQ item component, what you have to do you have to provide me an string with the question properties, and it should be there, it should not be an empty. Right, like you can't render a FAQ item without a question props. We also want to go and answer. And answers would also be a string and is required. So this is after this is and what it does is like when when this after just calls this FAQ item function is first going to run this this method and what this is going to do is like it check the question property so here it is going to check this question properties, like whether this property is a string or not. So if the properties is not a string, it's going to give you a console error. So let me show you. So if you go to the console. Okay, so let me let me show you something different like letting first fix this. Then I can just show you. So this is the thing you see we do not have console what happened if I provide you a different thing. Like let's say you provide it and add it. So this thing, and then just pass an object with the same a look. My name. Okay, let me find you. Okay, so you see in warning you have this thing failed prop type invalid prop question of type question of type object supply to FQ item and expected string. So this is why we are going to find is like why this app broke or why we are not able to see the desire result why because in FQ item, we are saying that question is of type of string should not be anything other than a string, and that's why React is able to show you and warning here. Hello, you are passing an object object type to the question but the expected should be string. Okay, so we are getting those warning once we removed it. Once we. Okay, so once I remove it. I can show you. Once we remove it we do not get anything like let me replace for you so that we can see, you see, we do not get any warning. That's why we are you will see the prop type written in every react component. So you will see the properties and what should be the value of properties, the type of the property this component use it. So you understand it, this prop types. What is this prop type is useful. So do you move to the next chapter. I think so. I have a short question. Okay. So, you wrote your FAQ item as a JSX. When I do that, I get an compilation error. I need to use a.js file. Don't know why. Like, you didn't the same thing. I think functions, it's all correct, but when I named the file as you did as a.js file, I get an error like the fjs does not find cannot import, which is funny to me. Can you paste your error into the chat. What would you like to get pasted. Yeah, so that I can see like error message, I think. And if you want, you can, you can just go there. And you can also go to this thing and you can just copy and paste your photos. It's just simply like it doesn't find the FAQ item JavaScript file. Yeah, I need to restart the npm compiler. No, no, no, no, no, no, no. Yeah, no such files or some FAQ item. Are you importing FAQ item into your FAQ item.js. 
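For reference, the prop validation being described here, written out in full. Note that prop-types is its own npm package; recent create-react-app projects do not necessarily ship it as a direct dependency, so add it with yarn add prop-types or npm install prop-types if the import cannot be resolved:

// src/components/FAQItem.jsx
import PropTypes from 'prop-types';
import './FAQItem.css';

const FAQItem = (props) => (
  <li className="faq-item">
    <h2 className="question">{props.question}</h2>
    <p>{props.answer}</p>
  </li>
);

// Static property that React checks in development builds: a console
// warning is logged when a prop has the wrong type or a required prop
// is missing.
FAQItem.propTypes = {
  question: PropTypes.string.isRequired,
  answer: PropTypes.string.isRequired,
};

export default FAQItem;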
Can you see. Can you import the CSS what it will be. Yes. No, no, that's not the problem in app.js. I have to import of the components FAQ item like you have no problem at all. Just remove.js thing at the end. Just remove it. Wait a second. I have no JS in the end. Just the same as you do. Look, as far as I understood the problem is that it works fine when he calls the FAQ item file, the file let's say FAQ item.js then everything works fine. Right? No, no, no. Correct. That's the point. Yeah, and it breaks when he renames it to.jsx with this super weird in my honest opinion. Yes, it is. I didn't write. I mean, you can you can just use.js instead and it'll work fine because.jsx and.js are basically the same. But why. Maybe you can try to restart and maybe you can. But for me, I do not have to do anything. You can see it right because Reactor already uses fast refresh. So whatever you make the changes just compiles it for you. Okay, I mean, it's not a big deal. It's not a big problem. Just was curious what the problem is. Okay, now it's a runny. Yeah, you are able to see that. No, no, I just have the same issue again. I did an NPM. I restarted the NPM server. So still the same problem. But don't matter. Don't care about it. It's a minor bug, whatever. Don't care. Yeah. Yeah, maybe I can do some research about that in the background and get back to you. That would be awesome. Thank you. Okay. Okay. Okay, so now we move to our next chapter, like using this, not a snaps or testing. So let me show you like what we have done till this far. Like we created the app component, right? At component uses a another component for the FQ item, right? You have written it. Then we also style this FQ item using the CSS. We also validate the prop types, like what the prop types should be and what is the type of the properties this FQ item. Item should have. Now, to make our app a better, we also have to write the test so that we can be more confident towards our next changes, which we made after this thing, like after the app improves, like to launch your app. Like 1.0. And now you want to refactor it. And you do not have any test. Then you do not know whether you broke your app or not. So for that, we should have a test which tells you like, hey, you made this changes. And this is why this feature is getting broke. And once you have this test, you will be able to see that and able to fix it before you release your app or your library or anything. Does that make sense? And for the simplest, we will be going to use the SNAP sort testing. So let me explain you, like what is the SNAP sort testing is all about. So what does SNAP sort testing do is like, it gets your component and it just render your component and store all the JSON, all the DOM attribute of your component into the JSON structure and save it into the file. And once you made any changes to your component and when you ran the test, it will regenerate your, it will re-render your component and regenerate the JSON and compare with the previous JSON. And if it does not match, it will be failed. And this failing test will let you know that like, hey, Aloka someone, like, hey, you made this. Like, hey, you made some changes. Do you really want these changes to be get applied? And once you say that, yeah, I want this change, then you can just update that SNAP sort and you just push to your wrapper or whatever, whatever you are managing or developing your app. So what does SNAP sort does is like, let me give you an example like how it will be helpful. 
Maybe think instead of this question answer, you have an object, like you get a data from the backend and you have an object with three things, like maybe you can code it like here, like this thing that this thing like code. You get an object and object in object we have this author and the book, right, and you have this author name and the book name, right. You have written this author and the book name and suddenly what happened is that is like someone came to your code and it just instead of this object, it chance to be added. Right, or someone just came or write and test which does not include into your mock up or everything. So what will be happen. This Eli thing is like let's see we have an object or any of the two book, book and author. Twice, right, and you have this component which renders the book names and the author name right. And what happened when third people, another person come and just write another book and author. And here, and when he tries to, tries to release this release this code or try to push this code or run the test, the test will fail because it will remind the person that hey, previously there is only two author. And there is one another author you have added, do you really want this author to be added or not. Maybe you are, you are responsible for adding six author right. You have, you have added the six author but mistake with mistake you added the two author twice like the same author twice. Then in test you can see that hey, this author is twice and then you can just come here and remove it. And then you update your test and then pass and you can just send your code to the any GitHub repo like wherever you are deploying or developing. Does it make sense like why you are using a snapshot testing. Okay, so now we are going to what we are going to do is like, first we'll create a file for FQ item test.js file. We'll also delete the app dot test.js file because we deleted all the initial content of the app.js. So if you go to the app dot test dot just where you can see that like we are getting the screen learn react but since we already delete like where there is nothing anything like run react is going to fail. So we are we are going to do me all the app dot test dot just file. And here we'll render the component and assert the markup so let me show you then we can write another text by euro. What we are going to do is like go to the component, create a FAQ item dot test dot just file into the just file you will import react from from react. You can also do not I think that you do not need to do this thing in port reacting. I think I messed up the training then you can render from the at the red testing library. This testing library Dica dependency dependency is coming from the create react app like once you created your app using create react app it already downloaded this library so we do not need to download this live library and then we'll import the FAQ item. It's already out of import for me via school and what we'll do is like we'll write describe. What is this the FAQ item. And then we'll pass a function which will be run once we run the test and we'll run the vendors a FAQ item. Let's pass another function with just checked renders our component and check it. And we also have to pass the properties. You can also copy paste it from the training website, but I'm just writing like, you know, like we can write it like this way. So this is it and then we just do this. Are you able to follow like are you able to write this. I think so. 
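A sketch of the test file being written here, using Jest and @testing-library/react, both of which come preconfigured with create-react-app; the prop values are just sample strings:

// src/components/FAQItem.test.js
import { render } from '@testing-library/react';
import FAQItem from './FAQItem';

describe('FAQItem', () => {
  it('renders a FAQ item', () => {
    // render returns several helpers; asFragment gives us the rendered markup
    const { asFragment } = render(
      <FAQItem
        question="What does the Plone Foundation do?"
        answer="Sample answer text"
      />,
    );
    // On the first run Jest writes __snapshots__/FAQItem.test.js.snap;
    // on later runs the new output is compared against that file.
    expect(asFragment()).toMatchSnapshot();
  });
});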
Yes, and then we are going to maybe just remove it. Let's see, maybe I haven't deleted the old App.test.js yet, so it gave me an error. So again, npm run test. Right, and we are matching the snapshot. And you see, the test runner already created a __snapshots__ directory next to your component, and if you open it, it has this content. So whenever it runs the test, it is going to match the rendered output against this object, and if something changes it gives you a warning: hey, this has been changed, do you want to update it or not? So let me also show you: instead of my name I will write Jakob's name in the props. You see, one snapshot failed, and you can clearly see that in the place of Alok we now have Jakob. So if you say, yes, I really want this change, I really want the Jakob version, then you have to update the snapshot. How do you do it? Just run the tests and press u to update them. Let's see. You see, there is the u command, so you press it, and you can see that one snapshot was updated. So now my test passes, and now I can commit it and push it to GitHub or wherever you handle your project development. This is why this is important: if you change anything in the rendered output, it gets matched against your previous snapshot, and if it does not match it fails locally, so you can manually check: hey, yes, something has changed. So did you understand why we are using snapshot testing? You can change anything and check it. Once you are done, please let me know. And that is also the way you can update your snapshots.

So I missed that, how do you update the snapshot? Okay, let me show you once more, because actually I do not want my name to be Jakob, I want my name to be Alok. So you go there and run npm run test. You see one snapshot failed, and if you look at the diff, Jakob is removed and Alok is added. You see the message: inspect your code changes or press u to update them. So you just press u, and you see one snapshot is updated.

So, now we move to our next chapter: how do we use state in your component? Yeah, Alok, maybe we should take a small break. I also think so. A half-time break so people can get new drinks or go to the bathroom, or just have five minutes of break or so. What do you think? What do you think, chat? I think that ten minutes will be fine. What's the time? 16:50. No, no, the current time. Yeah, the current time is 16:50. So maybe we can start from 7:00.
You mean 5, not 7. Yeah, 7 is in like two hours. Just the next ten minutes, right? Yeah. Okay, so our next concept will be how to use state in your component. So let me tell you what state is. Let's see: right now we have a hard-coded FAQ. If I show you the App.js, we have this hard-coded question and hard-coded answer, and we really do not want that: we want to add more questions, more answers and so on, and how can we do that? In that context, for manipulating data that belongs to a particular component, we are going to use state.

So, what is state? You can think of state as memory related to a particular component. Let me give you an example. Let's say you have an input field and you want the user to type something and submit it, say their name. How can we do that? You just want a state, a memory, where you store that person's name, and then you can send it to your database or something like that. Or you are shopping on an e-commerce site like Amazon, and when you click on add to cart, there is an add-to-cart component: it takes the item you want and moves it to the cart, and how does it work? It has a memory that is specific to that component. And this is called state. So you can think of it like this: state is the living memory of your particular component, and each component has its own state. So for this FAQItem: if you define state inside FAQItem.jsx, it does not matter how many times you use it in your App.js, each FAQItem has its own memory, and this memory is called state. Right, did you understand what state means in React? State is nothing more than a memory allocation for a particular component, and each component has its own memory, which is called state.

And for accessing state we will use a hook called useState, provided by React. So your question will be: hello, what is a hook? What is this hook, or the useState hook, or whatever hook you hear about? A hook is nothing magic: hooks let you hook into React features; a hook is just a function which lets you hook into a React feature. And what is a React feature? One of the React features is exactly this state thing. So React is giving you a hook that lets you access state in a function component. And just remember one thing: you cannot use hooks in a class component, because a class component already has state built in. There you define the state in your constructor and you update it using the setState method provided by React. But in a function component you cannot access the setState method, and that is why the hooks concept was introduced, so that in a function component you can still access this feature provided by React. I think now you understand what a hook is and why we are using the useState hook: because we want to use the state feature in our function component, and we cannot use the this.setState method there, that is why we are using a hook. Does that make sense, what I have just spoken? And another thing: whenever you update the state, your component gets rendered once again.
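As a minimal illustration of the idea (not part of the FAQ app), here is a counter component: useState gives the component its own piece of memory, and calling the setter makes React re-render the component with the new value:

import { useState } from 'react';

function Counter() {
  // count is the current value, setCount is the only way to change it
  const [count, setCount] = useState(0);

  // every click updates the state, and the update triggers a re-render
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;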
So you store something into your state, and once you updated the state, your component will be rendered automatically by the react. And those new new feature or new state will get rendered to your UI. We just have to keep into your mind. And if you want to change anything into your component, you have to put those things into the state. So if you want to make some modification into your component, like you want to add like you go to your Shopify and add a mobile like an iPhone 13, and you want two of them. So what you will do to just click on the incrementer and it's incremented twice. That's why so this increment method has an estate and this is the modification when you are clicking you are modifying the state and that modification state the component and then you see the two or three or whatever you want, whatever you see after the number of times you click. So whenever you try to mutate any particular behavior of the component or UI behavior, you are mutating the state and whenever you mutate the state the component will be rendered. It happens into the same into the model, like when you click into any model it just open and when you click the close it just close. And it's all happened due to the state. Does it make sense. I think so. So let's use our state into our app.js like let's see how we can use that and how it's done. So first, like now you have to import use effect, not use effect, use a state from the app. This is the hook. And we'll just write const FAQ list set FAQ list. So let me show you something remember like all the hooks hook function returns an array and this array has two value. The first one will be the value related to that particular state as second will be a setup function which you call for updating the value. So let me show you. So, when you do the let me show you at that point like. So, you use state and you pass an array. And you pass an array this thing. Right. Okay, so the FAQ list has this initial value. So the first state will return to method to value the first one will be the first thing, the particular value of data state and the second thing this set FAQ list will be a function which is called to update so if you want to update the FAQ list, you have to call set FAQ list, and you have to pass the whatever you want to be the value of FAQ list, you have to pass it in the set FAQ list function. Is everybody like understand what I have done there. What is your state and why you are using this. Okay, so now you understand like. So now, now we have this state like FAQ list has this thing. So we might have one to refactor it. So we do not want this question and answer to be here. So what we'll do is, I will just delete this thing, because we do not want it. And to go into like now we want to access this FAQ list so how we can do it, like for accessing the FAQ list we have to go into the JavaScript world, and the JavaScript world will be went by just putting the curly braces in the JSX. So what we are going to do, we are going to do the FAQ list dot map, because FAQ list is an array, and you want to map over it, and call FAQ item for each FAQ item component for each item. So, we have this thing. We have an item item is a function. And what we want to return is a FAQ item component. Right. And FAQ item like I can show you. Let's see. So, the thing is there because we are passing the FAQ item with no properties. 
Let's see what we have done: if you go to the console you can see "the prop answer is marked as required in FAQItem, but its value is undefined", and the same for question. This is why we need the prop types: they validate the props, and if you do not provide the question and answer you get a warning right in the console saying, hey, you have to provide an answer or a question. So we go to app.js, to the FAQItem, and pass question={item.question}. Now we get the question but not the answer, and the console still says the prop answer is marked as required in FAQItem but its value is undefined. So we also pass answer={item.answer}, and now we have both the question and the answer. I will explain the key thing properly some time later, but briefly: since we are using the map function and creating several FAQItems, React wants to differentiate the first instance from the second, like the first call for the first item and the second call for the second item. It wants a reference so it can compare whether anything changed, so we provide the index as the key, and that is why it was complaining. If you want to fully understand keys you have to go deeper into that topic. So now you see we do not get any console error and we have the questions and the answers. Are we on the same page? Or do you have any questions? Okay, so now it is your turn to use the state. Here is the exercise: to save space in the view, we want to show and hide the answer when you click on the question. Add a state variable to the FAQItem component which keeps track of whether the answer is being shown, and adjust the return so it shows or hides the answer. So: create a state variable, and based on that state show or hide the answer; once you click, the answer is shown, and once you click again it is hidden, so it toggles. You have about three minutes to do that; you just need useState and your code. Did you understand the exercise? Create a state with the useState hook and, based on it, show or hide the answer. Are you able to do that? Anyone? Nice. Anyone else other than Marcus? Okay, so let me show you how I am going to do it. You have to go to the FAQItem, and first of all we have to import useState from react. Then we write const [isAnswer, setIsAnswer] = useState(false).
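At this point App.js roughly looks like the sketch below. The exact questions in the initial list are placeholders; only the shape matters.

```jsx
import React, { useState } from 'react';
import FAQItem from './components/FAQItem';

const App = () => {
  // The list of questions and answers lives in the App component's state.
  const [faqList, setFaqList] = useState([
    { question: 'What does the Plone Foundation do?', answer: 'It protects and promotes Plone.' },
    { question: 'Why does Plone need a Foundation?', answer: 'To hold the copyright and trademarks.' },
  ]);

  return (
    <ul>
      {faqList.map((item, index) => (
        // "key" gives React a stable reference to tell the list items apart.
        <FAQItem key={index} question={item.question} answer={item.answer} />
      ))}
    </ul>
  );
};

export default App;
```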
Okay, so what I did: I imported useState from react, then inside the function component I declared the state and gave it an initial value, and the initial value is false. Whenever you call useState you provide an initial value, and that initial value is assigned to isAnswer, so isAnswer starts out as false. And do you know what this syntax is, the square brackets around the two names? It is called array destructuring. Say you have an array where the first element is "alo", the second is "jakob" and the third is something else. If you want to access the second you normally write a[1], and a[0] for the first, and assign them to variables one by one, like const alo = a[0]. But JavaScript provides a shorter way: you can just write const [alo, ja] = a, and alo picks up the first element and ja the second; add a third name and it picks up the third. That is why we write these opening and closing brackets around the two values that useState returns. Now, based on isAnswer, I am going to hide the answer. This paragraph is the thing we want to hide based on the state, so to access isAnswer in the JSX we go into the JavaScript world again with the curly braces, and we write isAnswer && in front of the paragraph. If isAnswer is true, we render the paragraph; if it is false, we just render nothing. That is how the && operator works: it short-circuits, so if the left side is false it stops there, and if it is true it gives you the thing on the right. So let's see what happened to our app: we do not get any answer, because isAnswer is false, and based on false it is not showing; if I make the initial value true, the answer is there. We will leave it as false. Was everybody able to solve this exercise? Let's move to the next chapter. Now we have one problem in our app: we want the answer to show when we click on the h2, so clicking the question should flip this false to true so we can see the answer, and clicking again should hide it. So we want to change isAnswer, and we change it through the setIsAnswer function. So just write a click handler that toggles the isAnswer variable: if isAnswer is true, toggle it to false, and if it is false, toggle it to true, setting the new state with setIsAnswer. That is your exercise: write a toggle handler, the way you would write it in JavaScript, which toggles the isAnswer state variable and sets the new state using the setIsAnswer function.
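A small sketch of the two ideas he just walked through, array destructuring and the short-circuit rendering; the array contents are made up.

```js
// Array destructuring in plain JavaScript:
const a = ['alo', 'jakob', 'third'];
const [first, second] = a; // first === 'alo', second === 'jakob'

// The same idea applied to the pair returned by useState inside FAQItem:
// const [isAnswer, setIsAnswer] = useState(false);

// And the short-circuit rendering inside the JSX:
// {isAnswer && <p>{props.answer}</p>}
```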
So you just have to write a function that sets isAnswer to true or false based on its previous value, and you set the value of isAnswer using the setIsAnswer method. Did I give you enough time? Was anybody able to do that? Okay, let me show you what I want to do: you just have to write a toggle handler. I told you to write a function, so I write a toggle function, and based on the current isAnswer value we want to set isAnswer to the opposite of its previous value. How do we set isAnswer? We call setIsAnswer, and we pass !isAnswer. The exclamation mark is the negation operator: it makes true into false and false into true. That is the whole toggle handler; in the solution, "write a toggle handler", you can see the same thing. Now we want to call this toggle handler when we click on the h2, so we attach it with onClick. Let me show you: we added the toggle function, which sets the isAnswer value based on the previous value, true becomes false and false becomes true, and when we click the h2 we are passing this toggle function. So let's see. We go to the React app, we reload so that we are on the same page, and once you click the question you see the answer showing and hiding. Was everybody able to do that? Okay, so I think everybody is fine and everything is working. The next chapter will be callbacks, to delete an item. So now we have the question and the answer, and now we want to delete it; we are going to remove this FAQ, the question and the answer, from the list. For that, the first thing we are going to do is add a button called a delete button, and when you click on it, the item will be removed from the list. So what you have to do is: add a delete button to the FAQItem view in the FAQItem.jsx file and create an empty onDelete handler, which is called when the button is pressed. Just do that one thing: create a button in FAQItem.jsx, write a delete handler for it, and call that function on click; it should stay empty, you do not have to do anything else. I'll give you one minute, because we already know how to write a function and how to call it. Were you able to do that? You just have to write a button as you would in HTML and attach the handler. Okay, so let me show you: after the answer, we add a button whose label is Delete, then const onDelete, empty for now, and onClick={onDelete}. And this is our app: now we have the delete button. It is not working because we have not written anything yet, but that is how far we wanted to get. So shall we move forward? I think so; there is nothing more it needs to do right now.
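Put together, FAQItem roughly looks like this at the end of the step: the toggle works and the delete button is wired to an empty handler. Markup details may differ from the training repo.

```jsx
import React, { useState } from 'react';

const FAQItem = (props) => {
  const [isAnswer, setIsAnswer] = useState(false);

  // Flip the previous value: true becomes false and vice versa.
  const toggle = () => setIsAnswer(!isAnswer);

  // Empty for now; it will call back into App in the next step.
  const onDelete = () => {};

  return (
    <li>
      <h2 onClick={toggle}>{props.question}</h2>
      {isAnswer && <p>{props.answer}</p>}
      <button onClick={onDelete}>Delete</button>
    </li>
  );
};

export default FAQItem;
```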
So now we have to write the real delete handler; we have to complete this function. First of all, the question and the answer come from app.js. When we click this button, we first need to know which FAQ item we clicked on, whether it was the first item or the second. For that we want an index attribute, and this index attribute should come from app.js; otherwise we have no way of knowing which question the delete was clicked on. So for deleting an item we must get an index: in FAQItem we receive it as a property, and in the prop types we can write index: PropTypes.number.isRequired. If the delete on the first item is clicked, the App component should provide index 0, and for the second item index 1. Okay. So now, let's assume we are able to get this index. What else do we need? We also need a function to call when onDelete fires, because we cannot delete the question from the FAQItem itself; the list does not live there. So we want a function that is provided to FAQItem: when the delete button is clicked, it should call a function provided by app.jsx, and that function is responsible for deleting that particular question and answer. Does it make sense what I am saying? Maybe I will explain it once more. When we click this button, the onDelete handler in FAQItem is called. Now let's assume we have the index, so we know the delete was clicked on the first item. But we cannot delete it here, because the question is coming from app.jsx, so app.jsx has to have a function which is going to delete it, and this function should be provided to the FAQItem. Let me show you how it works, and then you will understand. We also need onDelete as a property, and in the handler we just call props.onDelete with props.index. So what we are doing here: when we click on this delete button, our handler gets fired, and it calls the onDelete callback which is passed as a property from app.jsx, which we still have to fill in, and it passes the index along. And you can see that over in app.jsx we now have to pass these two as properties, index and onDelete.
So this is the FAQItem, with its index and its onDelete, and what we are doing is calling the onDelete which is passed down from app.jsx, with the index which is also passed down from app.jsx. So let's write onDelete in app.jsx; that will be our next task. We have written the handler in FAQItem, so now we are ready to change the App component: add a dummy onDelete handler to the App component which logs the index. In app.jsx we just define a function, const onDelete, which takes the index, and for the time being we just console.log the index. And in the properties of FAQItem we pass index={index} and onDelete={onDelete}. Okay, so now you can see that this FAQItem has two new properties: one is index and the second is onDelete, which keeps a reference to this onDelete function. The index is coming from the map function: map gives you three parameters, the first is the item, the second is the index, and the third is the array itself, so if you accessed it the faqList would be there. Okay, so now FAQItem receives the index property and also the onDelete property. So what happens when we are clicking on delete? The click calls the handler in FAQItem, and that handler calls props.onDelete, the function provided from App, and it also passes an argument, props.index, and in App we just console.log that index. So let's see whether we are able to see the console log of the index or not: go to the console, reload, then click on delete. Okay, so when you click on the first one we get 0, and when you click the second we get 1, zero, one. Are you able to do the same thing, seeing the index in the console when you click on delete? Did you understand the flow, what is happening when we click on delete? Can someone else also write yes or no in the chat, so that I know we are all able to do it? Okay. So let me explain it one more time, because it is an important concept and you should know it, otherwise you are going to face lots of problems. The faqList state is present in the App component, and for deleting any item from this state, we cannot do it from its children, like the FAQItem component. So what we have to do is create an onDelete function in the App component and call this onDelete function from the FAQItem. How can we do that? We pass an onDelete property referencing this onDelete function. We also want an index, so we know which delete has been clicked; for that we need an index property, so we are saying, hey, this is the first one, this is the second one. So I passed two properties, index and onDelete.
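On the App side, the dummy handler and the two new props look roughly like this; the FAQItem side would also declare index as a required number and onDelete as a required function in its propTypes.

```jsx
import React, { useState } from 'react';
import FAQItem from './components/FAQItem';

const App = () => {
  const [faqList, setFaqList] = useState([/* initial questions as before */]);

  // Dummy handler for now: just log which item was clicked.
  const onDelete = (index) => {
    console.log(index);
  };

  return (
    <ul>
      {faqList.map((item, index) => (
        <FAQItem
          key={index}
          index={index}
          question={item.question}
          answer={item.answer}
          onDelete={onDelete}
        />
      ))}
    </ul>
  );
};

export default App;
```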
I added a delete button, and when it is clicked we call the onDelete function, because we want to delete an FAQ item from the App, and we pass props.index, which determines which delete we clicked. Now we have to actually delete it from the state; we want to delete the question and the answer, the whole thing, once we clicked delete on that FAQ item. So how can we do it? First we create a variable, faq, and we copy the FAQ list into it with the spread operator: const faq = [...faqList]. The spread operator says, hey, give me everything that faqList has, make a copy of it, and put it here, and that copy gets assigned to faq. Does it make sense? I think so. Then we want to delete the object that sits at that index, and in JavaScript we can use an array method called splice. Splice takes the index and the number of items you want to delete, so let me show you: faq.splice, and here we put the index we want to delete from and how many items we want deleted, which is one. So what will happen: faq holds a copy of the array, and when you pass it index 0, it goes to that position, removes that element, and the resulting array is what remains in faq. If you pass a different index, it deletes that one instead. Then we want this faqList state to be set to that resulting list: setFaqList(faq). Okay, so does it make sense what I have done? I just made a copy of the faqList, I deleted the object which is present at that particular index, and then I set faqList with the array that results after deleting that object. Does it make sense what I have just spoken? Yes, but: why copy the list, right? So, whenever you read about React, everyone will tell you: do not mutate the array or the object. What is the real deal with objects and arrays? As you know, arrays and objects are handled by reference. faqList is a variable which points to an object that is stored somewhere in memory, and that object has these items. And if you do not make a copy here and you just delete from faqList directly, you mutate the original array. And if you mutate it, then wherever this faqList is used or accessed, it also gets modified, because everything is accessing the same object. Let me show you: we have app.jsx, we have one component called FAQItem, and imagine we also have another component, say an Author component, and it also gets the faqList; it is just an example. So what will happen is this: if you modify the original, then wherever this faqList is used, it gets modified too, because it is accessing the same object.
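The real delete handler in App.js then becomes something like this sketch:

```jsx
// Copy the list, splice out one item at that position, store the new array.
const onDelete = (index) => {
  const faq = [...faqList]; // spread operator: shallow copy of the array
  faq.splice(index, 1);     // remove exactly one item at that index
  setFaqList(faq);          // triggers a re-render with the new list
};
```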
The object is present at a particular place in memory, on the heap, and when you modify the real object, every place that holds a reference to it sees the change, and after some time you are not even able to tell which step changed it. So you always make a new copy of the list or the object. Yeah, so basically that is the idea. A participant asks in the chat: if you don't want to copy the array, shouldn't it also work to put everything directly into the setFaqList call, without defining that faq variable first, doing everything at once? Isn't the extra variable unnecessary, even if the code just looks worse that way? No, no, you have to do this, otherwise you will have problems in a bigger app, because if you are not making a copy you modify the original. Let me show you in the console, maybe. const alo = { a: 'alo', b: 'jakob' }; that is the object, and if you do alo.a you get 'alo'. Then you define another thing, const b = alo. Now I delete alo.a, which returns true, and then I look at b.a: we no longer have the a property on b either. So what is happening there? You see, when you have this kind of thing, and let's assume the same faqList is used in the Author component or somewhere else, and you delete something from it, it will break the other component which is using the same faqList. Okay? That is the problem when you do not make a copy. Now let me show you the other way: const alo = { a: 'alo', b: 'jakob' } again, and then const b = { ...alo }, spreading alo into a new object, so here I made a copy of alo. Then I delete alo.a, and now when I look at b.a I still get 'alo'. Are you able to see that? Let's continue. That is why we are making the copy, so the other references still get their data. A participant is of the opinion that the copy is probably not strictly necessary here; but in React you are always told: please do not mutate the array or the object, always create a new object or a new array and set it. Otherwise, if you delete from the faqList directly, you modified it, and do not think of it only as this small app; think of a marketplace, or a profile, or a sidebar, where you modify something which is present way above and delete from it. Then all the other components which depend on the same props will break, because they cannot find that property anymore, but once you make a copy we still have it. Okay, so I think now you understand why we are making the copy. Please remember, always, if you are working in React, make a new copy; otherwise you are going to have a bug which you will not be able to detect just by looking with the naked eye.
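The console experiment he runs is plain JavaScript and you can try it yourself; the property names are arbitrary.

```js
// Two variables pointing at the SAME object: mutating one "changes" the other.
const alo = { a: 'alo', b: 'jakob' };
const b = alo;    // b references the same object, not a copy
delete alo.a;
console.log(b.a); // undefined, the shared object lost its property

// With a copy, the original can be mutated without breaking the copy.
const alo2 = { a: 'alo', b: 'jakob' };
const c = { ...alo2 }; // spread: c is a separate object
delete alo2.a;
console.log(c.a); // 'alo', the copy is untouched
```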
Okay, then there is a question in the chat: in that case, shouldn't line 20 be faq.slice? Well, because of what we just discussed, I will stay with the copy and splice. Let me just try it: I click, and you see, we deleted the item. I am deleting "Why does Plone need a Foundation?", both entries are the same, and they are gone just as fast. So, did you understand this deleting flow and everything? Okay, then I think I should move to the next chapter. So now we have an app with a functionality where we can delete a question. Now we want to add a form to add a question: you only have two questions, but we want to add more and more, so how can we do it? For that we are going to add a form. So let's add the form to our app.jsx; I am just going to copy it from the training site, I pasted the link into the chat. It is just a form, and it sits like this: you put the div, then the ul, and then the form. Just copy that whole thing and paste it in, and we should have this form working. Is everybody up to this point? Do you have the form working on your local machine? Okay, I think most people have done it. So now we can go back to our app and see what to do next: now we want to manage the field values in state. We have this question input and the answer textarea, and while you are typing we want the text stored somewhere, so that when you click on the Add button it can be added to the state. For that we need one piece of state for the question input, whatever the user is typing into it, and another piece of state for the answer. So what we are going to do is add two new states, one for the question and one for the answer, update the state of the question and the answer from an onChange handler, respectively, for both, and track what the user is typing in the state. Everything the user types into the input or the textarea is stored in our component memory, called the state. And maybe I can explain this first: there are two ways in which you can access an input value, one using a controlled input and one using an uncontrolled input. You can see here it is written: add an onChange handler to the input and the textarea which will change the value in the state when the input changes; this pattern is called a controlled input. What a controlled input is, let me explain once I am done. So what do we want? Two states, one for the question and one for the answer: const [question, setQuestion] = useState('') and const [answer, setAnswer] = useState(''), both starting as empty strings. And then we put the value attribute on the fields: value={question} on the input, and on the textarea value={answer}.
Now we have an empty question and an empty answer, so just go to the app. Whatever you type here, you see, I am typing, but nothing appears. Do you know why? Because what we are telling React is: hey React, this input field has this value, and this value comes from the question state, and what is the question? The question is the empty string. So whatever you are typing, it does not get shown. In this way we are controlling the input field using the state and the value property of the input field. Is everybody with me so far? Okay, now I want to show you another thing. Let me remove this value attribute and save, then go to the question field: "This is my question". Now I am able to write, and the same for the answer, but nothing is being tracked. That only happens when you are controlling its value from the value property and the state. And if you still want to access what was typed the other way, you can use a ref in React: you give the element a ref, and then you can read the typed value from the DOM node through that ref, whatever the user put into that input. That approach is called an uncontrolled component, because you are not controlling the input value; you are directly accessing the DOM attribute. We are not going to do that; we are going to write controlled components, because in the long run it gives you more advanced features you can build on. So we now have the question field, and now we want to listen while the user is typing, and for that we use onChange. On the question input we put onChange={onChangeQuestion}, and on the answer we put onChange={onChangeAnswer}, and we declare const onChangeQuestion and const onChangeAnswer. So what should happen when someone types something into this input field? We call onChangeQuestion, and what should onChangeQuestion do? It should set the question state to the same value the user is typing. How can we change the state of the question? By calling the setQuestion function provided by useState. And how can we access the value the user is typing? From event.target.value. So when this onChangeQuestion is called, it is called with a parameter, the event; I call it e, but you can also write event, and you access event.target.value. We can also console.log the event to look at it. So let's see: this is the console, and when we write something, "What is your name", you see we get a console entry for every key we press, and once you open it you have this target; the target is the input element, and the input element has the value.
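A minimal standalone sketch of the controlled fields; in the training the same fields live directly inside App.js next to the faqList state rather than in a separate component.

```jsx
import React, { useState } from 'react';

const AddFaqForm = () => {
  const [question, setQuestion] = useState('');
  const [answer, setAnswer] = useState('');

  // Controlled inputs: what is displayed always comes from the state, and
  // every keystroke goes through the handler via e.target.value.
  const onChangeQuestion = (e) => setQuestion(e.target.value);
  const onChangeAnswer = (e) => setAnswer(e.target.value);

  return (
    <form>
      <input value={question} onChange={onChangeQuestion} />
      <textarea value={answer} onChange={onChangeAnswer} />
      <input type="submit" value="Add" />
    </form>
  );
};

export default AddFaqForm;
```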
You see, this is event.target, and target is the input; go into the input and it has its attributes, the value attribute holds what was typed, and that is what we read with event.target.value. Now you understand how this works, right? Now you have to write the same thing for the answer; just complete that function, I am giving you one minute. Currently when you type into the answer it does not update. Here the question is working, "What is your name", but "My name is Alok, I am chatting" is not updating, so once you complete it, typing should update the textarea with your text. Was anybody able to do that? Okay. How much time is left? I think 45 minutes. So we just do the same for the answer: setAnswer, take the event, e.target.value. Refresh: "What is your name", "My name is...", now both fields work. Right. Now we want the Add button: when we click on it, it should add our question and answer to the list. How are we going to do that? We already have the submit input whose value is Add, so on the form we listen for submit: onSubmit={onSubmit}, and we declare const onSubmit. Now what do we need in there? First of all, we need access to the question; we also need access to the answer; and then we want to build another object with the same properties, question and answer, and add it to the list. So how are we going to do that? First of all, the same thing as before: copy the list, const faq = [...faqList]. Then we push onto it: faq.push({ question, answer }). That is the short form of setting properties on an object; normally you write the key and the value, like question: question, so that this becomes your key and this your value, but when your key and your value have the same name, JavaScript lets you omit the second part, so { question, answer } means question: question and answer: answer. Then we set the state: setFaqList(faq). It should work, but keep paying attention, there is one other problem: when I click on Add, you see the whole page gets reloaded. That is what we have to fix next. Let me type it again: "What is your name?", "My name is Alok Kumar".
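The submit handler he builds looks roughly like this; e.preventDefault() is the fix for the page reload that he adds a moment later, and clearing the fields at the end is my addition, not something done in the session.

```jsx
// Excerpt sketch from App.js, assuming question, answer and faqList state.
const onSubmit = (e) => {
  e.preventDefault();             // stop the browser from reloading the page
  const faq = [...faqList];       // copy the list, never mutate the original
  faq.push({ question, answer }); // shorthand for { question: question, answer: answer }
  setFaqList(faq);
  setQuestion('');                // optional clean-up (assumption, not in the session)
  setAnswer('');
};

// ...and on the form element: <form onSubmit={onSubmit}>
```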
You add it, and you see, now the new entry is there, and when you click on it, it also shows you the answer. Does it make sense? Are you able to do that? Okay, so I want to show you one more thing before we move on: you can also do this in one step, calling setFaqList directly with the spread of faqList plus an object with the question and the answer, and it will also work. Okay, something has changed; you know what, I made something wrong, because I removed e.preventDefault(). We need that so the form submit does not reload the page. Okay, so now we reload: "What is your name?", "My name is Alok Kumar", you see, it is working. So you can also write it in one line; it is just JavaScript, you can do anything you want. Do you understand that? Are you able to do the same thing I am doing, are we on the same level? I think so. Okay, so let's look back at what we wanted to achieve: we wanted to add the form, and we did that; then we wanted to manage the field values in the state, so we added the question and the answer; and we also wanted a submit handler so that we can submit, and we have that too. You can compare with the solution, "form and submit". So now we can add a new entry, and we can delete it. But say I created and added "What is your name?" and it does not make sense; I want to change it too. There should be an Edit button, and once I click it, the entry should be editable, so that I can edit my question and also the answer. And this will be our next chapter: now we have to add two modes to the FAQItem. If we are in edit mode, we will be able to rewrite the question and the answer, and if we are in view mode, we will show the question and the answer like this. And when you click on Edit, we should get a form with the question and answer fields, and we will be able to update them. So how are we going to do that? First of all, we have to create a state in the FAQItem so that we can track whether we are in edit mode or not. So let's declare that: const [isEditMode, setIsEditMode] = useState(false). What we are saying is: if isEditMode is true, we show the form, and otherwise the view. Now, about this wrapper: this is called a React fragment. If you want to return two children in React and you do not want to wrap them in some extra element, you have to wrap them with a fragment; otherwise you would have to wrap them with a div. So I am going to wrap it with the fragment. Then we have to go into the JavaScript world, because I want to access isEditMode in the JSX, so I put the curly braces, and based on isEditMode, if it is true I am going to show the form, and if it is false I am going to show the view mode.
If isEditMode is true we are going to show the form, and if it is not, the view. We can add the form the same way we did it for the App: go back to the previous step, put it inside the li, and paste the form in. This question-mark-and-colon syntax is called the ternary operator, and it works like this: if the condition on the left-hand side is true, it shows the first thing, and if it is false, it shows the second. Does it make sense what I have done? Okay, now let's go to our React app. We have the form in place, but we do not have a button to toggle the edit mode, so the first thing is to add a button. So now we have the Edit button, and when it is clicked we want to toggle the edit mode. I am giving you a couple of minutes, not five, because we are short on time: once the Edit button is clicked, add an onClick handler, create a function called onEdit, and using setIsEditMode toggle isEditMode, the same way we did it for isAnswer. Okay, so let me show you how I am doing it; I am not going to give you more time because we are already late: const onEdit = () => setIsEditMode(!isEditMode), and on the button we put onClick={onEdit}. Right. So we go to the app, and when we click Edit we get the question-and-answer form; you see it in all three places, because each item has its own state. Was everybody able to do that? Just give me a confirmation. Now, after adding the form, what we have to do is create a controlled form and pass an onEdit handler to the FAQItem component, like we did with onDelete. Let me explain what that means. For updating any question and answer, we cannot do it from the FAQItem itself: this component does not store the FAQ list, so it is not able to update the list. So for updating we have to pass a callback from app.jsx down to the FAQItem; once the FAQItem has updated its content, it will be able to send its content up to app.jsx, and app.jsx replaces that question-and-answer object with the new, updated one. Does it make sense? I think so. So in app.jsx I write const onEdit, and we also pass it down as a property: onEdit={onEdit}. So what we are doing: since we are not able to update the FAQ question and answer from the FAQItem, we create a function in app.jsx which handles updating the question and answer, and we pass it down. Then, in the FAQItem, first of all we add it to the prop types: onEdit: PropTypes.func.isRequired. And second, we now have to convert this form
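FAQItem with the edit-mode flag roughly looks like the sketch below; the edit form itself gets filled in over the next steps, and the exact wrapper (fragment versus list item) may differ from the training repo.

```jsx
import React, { useState } from 'react';

const FAQItem = (props) => {
  const [isAnswer, setIsAnswer] = useState(false);
  const [isEditMode, setIsEditMode] = useState(false);

  // Toggle between the edit form and the normal view.
  const onEdit = () => setIsEditMode(!isEditMode);

  return isEditMode ? (
    <li>
      <form>{/* controlled question/answer fields go here next */}</form>
    </li>
  ) : (
    <li>
      <h2 onClick={() => setIsAnswer(!isAnswer)}>{props.question}</h2>
      {isAnswer && <p>{props.answer}</p>}
      <button onClick={() => props.onDelete(props.index)}>Delete</button>
      <button onClick={onEdit}>Edit</button>
    </li>
  );
};

export default FAQItem;
```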
into a controlled component. How can we do that? First of all, the input needs a value property, and it also needs an onChange, onChangeQuestion. For the answer textarea to be controlled, it also needs a value property and an onChange, onChangeAnswer. Someone in the chat points out I wrote onChangeQuestion twice instead of onChangeAnswer; yes, thank you, fixed. Okay, so this is correct now. So what should our initial value be? The initial value should be our initial question and answer. We are already getting those as props, so let's fill them in: props.question is our initial value for the question input, and props.answer is the initial value for the answer. And we need to write the onChange handlers, so let's declare them, but I am going to keep them empty for now: const onChangeQuestion = (e) => {} and const onChangeAnswer = (e) => {}. So what we have done to make it controlled: we added value and onChange; the initial value for the question is props.question and for the answer props.answer, and on change we call onChangeQuestion and onChangeAnswer, but they are not doing anything yet; we will fill them in once we see what happens. So what happens when we click on Edit? Now we get a form with the question and the answer filled in. Now I also want to update it, but I am typing and nothing changes, because the value is not changing; props.question and props.answer do not change. So how can I change them? Only if we have state. So we have to declare two more states in the FAQItem: const [question, setQuestion] and const [answer, setAnswer], both with an empty initial value, and we pass those to the fields instead, value={question} and value={answer}. Right. But now if you go and click Edit, we no longer see the previous question and answer. So how can we fix that? When we switch into edit mode, we call setQuestion(props.question) and setAnswer(props.answer), so once you enter edit mode you have the current question and the answer again. But I am still not able to edit, because we have not written anything in onChangeQuestion and onChangeAnswer. So how can we do that? Simply with setQuestion(e.target.value) and setAnswer(e.target.value). Okay, so let's check it: "What is your name", and now I am able to edit it, "My name is Alok". Right. But we still do not have an onSubmit handler on this form, so that is what we do next.
In the onSubmit handler we do not want the page to reload, so first e.preventDefault(), and then we are going to call props.onEdit, which is provided by the App, and we pass props.index, which item we are editing, the question we edited and the answer we edited. And we are not calling onSubmit anywhere yet, so we go to the form and add onSubmit={onSubmit}. Right. Now we have to edit app.jsx to incorporate our new values. How? First we copy the faqList, then at faq[index] we put the updated question and answer, and then we set faqList with that faq array. Did you understand that? I think so. Okay, let me show you: I click Edit on "What is your name", make a change and submit, and the item is still showing the form. Let me see what I am missing. Okay, so it is actually working, it is setting the faqList, but in the FAQItem I am not switching back to the view component. So what I have to do is also call setIsEditMode(false); I think I had put it in the wrong place. Okay, so it is not a big problem, just one thing I messed up in onSubmit: after submitting, I want isEditMode to be false, so we do not see the form anymore but the updated question and the answer. So now I am able to show you: I edit "What is..." to "hello", press the submit button, and once you click on the entry you see the updated text. Does it make sense? If you have any problem, please ask; I just missed that one line. In the training material this is the "use the initial form data" step. So, two modes for the FAQItem, and we have done it: we added isEditMode, we created the onEdit handler, in edit mode we show the form when the button is clicked, otherwise the view; then we made the fields controlled components so we can change the question and the answer; on entering edit mode we set the edit mode to true and seed the initial question and initial answer; on change we set the new answer and question values; and on save, when you submit, we call the onEdit callback which comes from the App component. And this is the App component: this is the onEdit, where we copy the faqList, at the particular index where we edited the question and answer we set the new object, we set the faqList, and we get the desired result. Does it make sense? Okay, I think so. Is my audio and video all right? Yeah. How much time do we have, like five minutes? I am not sure whether to dive into Redux now. Yeah, so maybe we can wait a little bit; I will just show you the first steps. You can try, but it is only five minutes. Okay, maybe we can extend another 10 or 15 minutes and then we can do that; not sure when we will be shut down. Sorry, one question: are we going to continue tomorrow, or is it another block tomorrow? No, no, we are going to continue tomorrow, but with the Volto part of this training, and maybe Jakob can go through it. So maybe let me go as far as I can, and all the remaining things Jakob will tell you, or maybe I will take 15 minutes tomorrow. Does it make sense?
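To recap the editing flow in code: these are excerpt sketches of the two files, assuming the state hooks set up earlier. I call the FAQItem toggle onEditMode here to keep it visually distinct from the onEdit callback prop; the naming in the session differs slightly.

```jsx
// In FAQItem.jsx:
const onEditMode = () => {
  setIsEditMode(true);
  setQuestion(props.question); // seed the form fields with the current values
  setAnswer(props.answer);
};

const onSubmit = (e) => {
  e.preventDefault();                          // do not reload the page
  props.onEdit(props.index, question, answer); // hand the new values up to App
  setIsEditMode(false);                        // switch back to the view mode
};

// In App.jsx:
const onEdit = (index, question, answer) => {
  const faq = [...faqList];          // copy, never mutate the state directly
  faq[index] = { question, answer }; // replace the edited item
  setFaqList(faq);
};
```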
So let me continue with Redux, and then tomorrow I will continue with the same thing, right, Jakob? Yeah, we can do that; I just talked to Paul and we might get another 15 minutes from now on, if that fits you. Okay, let's do that, I think I can do that. So, what is Redux? Redux is used for storing state, like what we are doing with useState, but currently we are storing our state locally, inside each component, and there is some state which needs to be present globally. Think of a theme: we want a dark theme or a light theme, and when you click on the moon icon, all the colours in the background and the foreground should change everywhere. For that kind of thing we are going to use Redux, because it makes it easy for all the components to get their data from the store instead of this props passing. Now we have only two components, so we can pass everything from app.jsx, but maybe you have a whole list of components, like a theme component, and then you would have to pass each prop from app.jsx all the way down, which is not a good way to do it. So we use Redux: in Redux we store the initial state, and the actions and the reducer update the state. So first we are going to install redux and react-redux: redux itself, because you can use Redux with all the other frameworks, like Vue or Angular, and react-redux as the binding for React. So we have now added redux and react-redux to our app. For mutating the Redux store we need to create an action: whenever we want to update our store, we dispatch an action, and this action describes the update, like: hey, I want to add an FAQ item with this question and this answer. So let's create an actions folder in our src, and in actions we create index.js, and the first action will be exactly that thing, adding an FAQ item with the question and the answer. The action, together with your state, goes to the reducer and tells it: hey, this is the new action which was just dispatched. What the reducer does is take the action, update the store, and return you the new state. So we are going to create a reducer: we create a reducers folder, and in reducers we create an FAQ.js file, and in that file we do this simple thing; a reducer gets your state and your action. So now we have to write a reducer for handling the add, edit and delete item actions, and we can do that simply like this; let me explain it.
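The actions file could look roughly like this; the action type strings and payload shape are my assumptions, the training repo may use different names.

```js
// Sketch of src/actions/index.js
export const addFaqItem = (question, answer) => ({
  type: 'ADD_FAQ_ITEM',
  question,
  answer,
});

export const editFaqItem = (index, question, answer) => ({
  type: 'EDIT_FAQ_ITEM',
  index,
  question,
  answer,
});

export const deleteFaqItem = (index) => ({
  type: 'DELETE_FAQ_ITEM',
  index,
});
```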
So: FAQ is a function which has the state and the action, and the initial value of the state is an empty array. Based on the action, for the add-FAQ-item action we want to make a copy of the array, so we return a new array with a copy of the old one and we append a new object to it. For edit we copy the state, do the same thing we were doing in app.js, and return the new list. And for delete we splice at that index and return the new list. And this is the default: whenever an action comes in that this reducer does not handle, we just return the state unchanged. So this is the reducer. Now, since this is the reducer for only one part of the state, if we had another feature, like login, where we want to store whether the user is authenticated, with log-in and log-out actions, we would create another reducer for login, and it would handle those things. And for combining those reducers we import combineReducers from redux: we import the FAQ reducer and just export the result of combineReducers with it. So the reducers folder has an index.js file, and it just returns all the combined reducers; you can create another reducer, like a login one, the same way as the FAQ one, import it, put it into combineReducers, and combine all the reducers and all the state in one place. As for writing tests for your reducer: since your reducer is a pure function, it just needs a state and an action, and based on that it returns you the new state, so we can easily write a test for it.
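Following that description, the reducer is a pure function of state and action that always returns a new array; the type strings match the ones assumed in the actions sketch above.

```js
// Sketch of src/reducers/faq.js
const faq = (state = [], action) => {
  switch (action.type) {
    case 'ADD_FAQ_ITEM':
      // new array = copy of the old one plus the new item appended
      return [...state, { question: action.question, answer: action.answer }];
    case 'EDIT_FAQ_ITEM': {
      const newState = [...state];
      newState[action.index] = { question: action.question, answer: action.answer };
      return newState;
    }
    case 'DELETE_FAQ_ITEM': {
      const newState = [...state];
      newState.splice(action.index, 1);
      return newState;
    }
    default:
      // unknown action: return the state unchanged
      return state;
  }
};

export default faq;
```

And the combined root reducer:

```js
// Sketch of src/reducers/index.js
import { combineReducers } from 'redux';
import faq from './faq';

export default combineReducers({ faq });
```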
So now we are going to write the test for the add-FAQ-item action. We create FAQ.test.js and put this in: we have not changed anything in the app, we are just writing a test where we call the FAQ reducer with the initial state, an empty array, and we pass an action, and that action should return us a state with the question and the answer. I save it, run npm run test, and now we have two tests: one failed at first, because we had to update the previous one, and now you see two passed; there is a warning about a prop value being undefined, but nothing serious, that comes from the other test. So this is the test of the reducer: we just call the FAQ function, passing an initial state and an action, and based on that we check that it returns an array with an object containing the question and the answer. I think that makes sense. So now we want to wire up the store, so that we can manipulate our store using actions from our app. For that we are not going to amend app.js directly; we are going to create another component called Faq: in the components folder I create Faq.jsx, go to app.js, copy everything into it, and instead of App I call it Faq. I have not changed anything else at all; it is an exact copy of app.js. And in app.js we remove all of that and put this instead: we create a store with createStore, passing the root reducer, which is reducers/index.js with our FAQ reducer, and react-redux provides you a Provider; we wrap our Faq component with the Provider and pass it the store. So now everything is wired up properly: we have the store, and the store is configured with the FAQ reducer. So let's go to our React app, npm start. Okay, Faq does not have app.css, so we can remove that import, and we need to fix one thing coming from FAQItem. And now we have everything working as before: if I add "What is your name?", "My name is", it works, and if I delete, that works, and if I edit "hello", that works too. We just refactored and wired up the store; next we would actually start using it. But, as Markus says, we are really running out of time, and maybe it is better to do most of the Redux part again tomorrow in a bit more relaxed environment, where I can show things in a little more detail, because wrapping it up in three minutes is not really worth it. Jakob says he can give me an hour or so of his training time tomorrow, no problem, so we can do it a little less hastily. People have been here for over four hours and their attention span might be a little overstretched, so I think we should do it that way.
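A sketch of what the reducer test and the store wiring could look like; the assertion shape and file layout are assumptions based on his description.

```js
// Sketch of src/reducers/FAQ.test.js (Jest)
import faq from './faq';

test('adds an FAQ item', () => {
  const action = { type: 'ADD_FAQ_ITEM', question: 'What is Plone?', answer: 'A CMS.' };
  expect(faq([], action)).toEqual([{ question: 'What is Plone?', answer: 'A CMS.' }]);
});
```

And the refactored app.js wrapping the new Faq component with the Provider:

```jsx
import React from 'react';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import rootReducer from './reducers';
import Faq from './components/Faq';

const store = createStore(rootReducer);

const App = () => (
  <Provider store={store}>
    <Faq />
  </Provider>
);

export default App;
```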
it that way. Yeah, okay. Then... "runs out of brains" error. Yeah, but from this point of view you should take away this understanding: we created an action, and it's nothing more complicated than that. Yeah, maybe just redo this part tomorrow, Alok. It's a complex topic, exactly as Markus just wrote, and it's not something you should rush over in fifteen minutes. What I'd say is we wrap it up for today. Thank you all for attending, and I hope you will be back tomorrow. Wait a second, I'll show myself again. So, for the next part of our training, Alok will continue with the Redux part we got a glimpse of today, and then the next part will be the introduction on how to actually use the knowledge you gained today to work with Volto, which I'm really looking forward to showing you. Anything from your side, Alok? I do not think so, but yeah, I'll grab some time from you tomorrow so that I can see you. Yeah, no problem. Then see you tomorrow, guys. See you tomorrow. Bye. See you tomorrow. Thank you. Bye.
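For reference, here is a minimal sketch of roughly what the reducer, its test, and the store wiring walked through in this session might look like. The file names, action type and payload shape are assumptions for illustration, not taken from the actual training code.

```js
// reducers/faq.js - a sketch of the faq reducer (action type and payload are assumed)
export default function faq(state = [], action) {
  switch (action.type) {
    case 'ADD_FAQ_ITEM':
      // return a new array containing the new question/answer pair
      return [...state, { question: action.question, answer: action.answer }];
    default:
      return state;
  }
}

// FAQ.test.js - calling the reducer directly with an initial state and an action
import faq from './reducers/faq';

it('adds a FAQ item', () => {
  const action = {
    type: 'ADD_FAQ_ITEM',
    question: 'What is your name?',
    answer: 'My name is something',
  };
  expect(faq([], action)).toEqual([
    { question: 'What is your name?', answer: 'My name is something' },
  ]);
});

// App.js - creating the store from the root reducer and wrapping Faq in a Provider
import React from 'react';
import { createStore, combineReducers } from 'redux';
import { Provider } from 'react-redux';
import faq from './reducers/faq';
import Faq from './components/Faq';

const rootReducer = combineReducers({ faq });
const store = createStore(rootReducer);

const App = () => (
  <Provider store={store}>
    <Faq />
  </Provider>
);

export default App;
```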
Trainers: Alok Kumar & Jakob Kahl. Learn how to create your own website based on Volto.
10.5446/13814 (DOI)
Welcome back to track one at the Plone Conference 2021, day one. And with me is Piero Niccoli, who is a front end developer, who's also been the organizer of the Plone Conference in Ferrera. And he is going to be talking today about Io Comune, the solution for Italian municipalities. Please go ahead, Piero. Thank you, Kim. And hi everybody. So yeah, now we will talk about a solution that we built for Italian municipalities and public administrations. So, as Kim said, I work at Retal, I'm also part of the Volto team. And for any questions in the next days or whenever you want, you can find me on Twitter and on GitHub. And, but yeah, I said, we built a solution, but a solution to what? Why would the public administration need a single solution that works for every specific municipality website or public administration website. Well, some of you have heard this already, but to be to keep everyone on the same page. Let me tell you the story in a short version. So some years ago, the Italian government built the digital transformation team. And the team's goal was to, well, it's to change how people perceive online services and websites for public administrations. And so a while ago, people needed to know where to look for the right service. What was the right office to go in order to do something and that they need to do. And what's the correct website for some for some other services. And actually, the digital transformation team built these guidelines that tell us that we need to build our websites with the with the focus on the on the user. So user centered. And so that the information that they can find that is readily available for them instead of taking long searches to find them. And these guidelines that they built have common rules for content types and for the content restructure of websites. And they also define a whole design system. And we also need to have scales, because we need many sites, and many themes, and we also need to allow few customizations here and here and there because they define the design system for all the websites of the public administration. And we also need to allow each client to bring their own visual identity so their own colors and their own logos. So, actually, the guidelines I'm talking about our version number two we already had a solution and Nicole and I talked about it at Plonkopf 2019. And in 2008 2019, though, a new version of these guidelines, the version number two was published, and it actually changed the entire design. So we had to implement a new thing. And we decided to do that from scratch, more or less, and we chose to adopt the Volta for that. And this is how it's supposed to look like with our custom logo. And but that's the design that they, they implemented. Well that's an implementation that they suggest for their design system. So, what are our goals this time so as I said we need reusable and customizable things. And, but also with this time we wanted to use the bootstrap based kit that the digital transformation team has prepared. We prepared an extension of bootstrap for and build their design system on that so we wanted to use that. And we wanted to do that, because that's one of the requirements in order to get our theme as the official open source plan theme for public administration. So there's a list and we want to be in that list for every CMS there's a list. 
And so, also, one of the things that the community has built upon this kit is a react implementation of bootstrap for with this kit so it seems logical to just use that and build our theme on top of that. But how do we build such products so we, we decided to build a common base theme and to build the specific sites for our clients with the dependency on these common base thing. And we actually built that already, and during 2020 and it's available open source at github.com slash retoro slash design dash Volto dash theme. And so you can check it out and I mean, play with it for kit whatever you want. And so the, the idea was, okay, we can build a theme, the base theme for kit and for each client. And, and we're good. But actually, we want to build something more than that. So, this is an example of how it looks like the already built one. So actually we want to build more than that. And what we did was already discussed last year actually at long come 2020 by Nicola. And he went into several implementation details and how to replicate this kind of product. And he talked about it in the YouTube link that's, that's over there on the slide and also you can find his lights on over at slides dot com. But to summarize that what we did was to, we wanted a base team, and we wanted to be able to upgrade that base team and have specific client sites, depend on that in order to upgrade them all together. So we in in the in the repository I linked earlier, we replicated the razzle configuration of Volto to allow for a third layer so we have Volto we have our own base Volto and specific Volto repositories for every for every site. And of course built a second Yeoman generator in order to create all of these third layers. And we're done, almost done. So, yeah, this is what we did. We're almost done. And because there's this small issue that we dealt with since last last long conference that is deployment. Why is deployment is an issue, because we're not deploying one site, but we're deploying many several. So deploying Volto is slightly more complicated than deploying classic loan, not scary not too difficult, it just takes slightly more, a little more configuration, a couple of tools, extra tools, and stuff like that. And also, baseline is your clone Volto on the production server and build it. And that's, yeah, when you have many on the same servers. I mean, we didn't like that a lot. So we wanted to find the another way to do that. And also when you deploy you need to add the more configuration because you have proxies, maybe more balancers need to be worried about caching and stuff. So we need to make this easily repeatable. And since we have, I mean, all of these deployments are identical because the sites are like the same site. So we want to also want also want to upgrade sites. So how do we do that. Okay, some of you know me, some of you don't, but I get excited when I start tinkering with new stuff. And every time I tell the story. I tell you about. And so what do we do with Docker all the things right. So that's how we make things repeatable. Right. It's not that doctors new technology but so we went a little deeper than just Docker. Because we have these repeatability goals that I told you about because we have the ones in production and the ones in development we have around 50 of these sites at this time. And many more to come. And so we started testing orchestrating our containers with Kubernetes. 
And also we started building our images on CI and everything that needs to be built and configured is handled by those tools. So those two tools. So we build everything and on the production servers, we run containers and that's all the production servers will need to do. So we actually are working progress thing about deployment. We have some of these sites in production. None of these still runs on Docker yet. But we have deployed some that should that will go live in the next few months. So we're starting with these Docker and Kubernetes things. So we will keep working on this. And yeah, of course, Docker allows you to avoid building Volto on the production server. So a few examples of these sites to show you why I said they're all the same. Of course, there are guidelines. And these guidelines are strongly encouraged and strongly pushed by this digital transformation team. So actually, almost every public administration who wants to build a new site wants to do the wants to do it that way. So this is the very first one that we we pushed to production, almost a year ago, I think, or less. And then there are others. And then we build more and some more and some more and even more. And yeah, you can see that there's a baseline. And among all of these and some visual identity because we want to have, I mean, the idea is, there are guidelines, and you as a municipality or whatever want to have your visual identity. But the baseline is the same. And so what's there for the future. Of course, the future means two things improving on like adding new features, and also working on on issues that you have because everybody has issues here and there. So what are our issues. So we said, we dealt with the, we dealt with the problem of upgrading and maintaining the code with by having a single base theme that we depend on in all of our specific client sites. So the main code is the same for every site but there's one thing that's not dealt with by by that configuration that is the configuration files, because having a separate repository for each site means that you have a separate docker file, a separate CI configuration, separate package Jason with some dependencies and other things and so on. So if I change how I want to build those images I have to go to 50 repositories and change the docker type. So that's not good. That doesn't scale it up. So, how do we fix these and what other improvements. Are we working on so. The thing is, we upgrade the repositories, the fifth, let's say 50 to make it simple, our 50 repositories. All together so when we push updates on the main theme, we trigger some CI things that upgrade dependencies on all the, all the sites and build new docker images and stuff like that. There's one thing that we are thinking about and experimenting with in newer products that might help with the configuration files maintenance, which is using add-ons and creating like moving the base theme to be just an add-on instead of a whole vault repository. And this was actually an easy one to think of. I mean, it would be today but when we built these base theme in the first like a year and a half ago add-ons were not a thing. So there were no add-ons for vault or there was no way to build add-ons for vault yet. Of course building repeatable themes with add-ons is useful because you have one repository and one base add-on and our idea would be to have separate add-ons for each site. So you keep all the configuration, deployment configuration in one single place. 
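As a rough illustration of that add-on direction: a Volto add-on is essentially a function that receives the global configuration and returns it. A per-site add-on layered on top of a shared base theme might look something like the sketch below; the package name, the settings touched, and the chaining through a direct import are all hypothetical, not how design-volto-theme is actually wired.

```js
// src/index.js of a hypothetical per-site add-on built on a shared base theme
// (names and the composition mechanism are made up for illustration)
import applyBaseTheme from 'design-volto-theme';

export default function applyConfig(config) {
  // start from the common base theme configuration
  config = applyBaseTheme(config);

  // then layer the site-specific visual identity on top
  config.settings.siteTitle = 'Comune di Esempio';
  config.settings.themeColors = { primary: '#0066cc' };

  return config;
}
```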
Of course, these will cause more issues because we will need to separate our builds and make separate builds with the same repository based on which client site we want to build because if we build an add-on for each one, then we will need to find a way to not include all the add-ons in every build. But we're working on it. We have ideas and we have ways to deal with it. So yeah, as I said, if you're curious about how we built an extendable Volto theme and how we work on that, check out the link that was in the previous slide and I forgot to put here because I didn't think about it but it would be useful. So github.com slash redturtle slash design dash Volto theme and everything that makes it work is there and any future work on it will be there. And yeah, thank you for attention and thank you for listening. Piero, thank you. Thank you for that great presentation. I have been wondering how you could manage to keep so many repos and projects updated like that. So I'm going to be checking out this repo. You should please put the link in the Slack track one and that'd be great for others to see as well. But I hope you'll join me in thanking Piero and he's going to be in the Jitsi meeting, which will be right after this and the button for the Jitsi meeting, of course, is below the video frame here in Loud Swarm. So thank you again, Piero. And yeah, don't be too happy being in Sorrento, okay?
In the past couple of years we have been working on developing a batteries-included solution for Italian municipalities based on the latest Plone 6 frontend technology. It offers repeatable deployments of Plone sites that are adherent to current regulations for municipality websites, while allowing the needed graphical customizations. And its core is open source. We will take a look at how it's made and how it joins forces with the community towards its goals.
10.5446/13840 (DOI)
Hi everyone, welcome to the second talk of track one today. Joining us live from Sorrento is Eric Reholt. He's been around the Plum community for many years and more recently has been developing on guillatina and working on front end development. So you kind of went the opposite way of me, you know, I started on front end, moved to back end, and you went started with back end and now working on front end. So Eric today is going to talk about ABFAB, talking more about front end development. So go ahead Eric. Thank you Chrissy. Thank you very much. Thank you. So last year I've used a t-shirt to display my slides, but that's 2020 now, 21, sorry. So not good anymore, not good enough. As demonstrated by Gidebore, this is the society of the spectacle, right? So we are permanently pressured by mass media, by social media. We make ourselves slaves of our online reputation, constantly pushing back the limit of nonsense and insanity. That's a sad word. And here I am for a miserable new attempt. This year, for what it's worth, my slides will be t-shirt. So about me, I started playing with Plon in 2006 and it led me to to Python. Yeah. And well, I got involved in the community. It was nice to meet a lot of people and I got involved and contribute more and more. And most of my contribution was on on Plon 5. Oops, that side. Plon 5, yes. And you can tell that it was a long time ago because we are now approaching Plon 6. That's the official t-shirt, right? I'm not contributing much in Plon. I moved to Fronten. I moved to Fronten and that's very nice. And Fronten, well, didn't move me away from the community. I'm still participating a lot. But it led me from Plon to Guillotine. And while Guillotine brought me to Ona. So Ona, just to give you a few explanations, it's a platform which allows to index tons of information into Guillotine and then search this information. So it's meant to be used by companies, large companies. And just for you to know, we are hiring. We are hiring in the US and we are hiring in Europe. Yeah. I don't know. You watch TV? I love Dess and Robots. It's a good show. Robots. No, yeah. I'm not sure anymore about this t-shirt slide thing. It's a bit limiting, probably. So, no, let's stop it here. I'm just making myself ridiculous, no? Forget about that. So coming back to my last year talk, I was talking about what I think about the current state of front end, criticizing the SBA pattern, which is well used everywhere, but also bring a lot of problems. We have huge bundles. We are over-engineering the full thing. We are kind of breaking the original web model because we are mixing what is the browser layer and the content layer. So I explained that last year, but I had nothing done, nothing concrete to provide. But that was my starting point. So then I tried to think about it and think it over. And, well, focus on things I don't like. I don't like in PM install. I don't like over-complicated JavaScript craziness. And I don't like endless CI builds. Okay. Then the things I like, like hot couture, champagne, HTML. That's a good mix. But what can we do from here? What can we do? Well, app fab, app fab. Yes, short name for absolutely fabulous. And here I come again with a t-shirt thing. So, yeah, I mean, when I start a project, I mean, I start a project. When I pick a name before any writing in line of code, when I pick a name, first I buy the domain name and then I buy a t-shirt. Don't we all, right? So what is app fab? App fab refers to absolutely fabulous. 
A fantastic English TV show in the 90s. Really perfect thing. And I think it perfectly impersonates the core values of what I want app fab to be. Wacky, wild and fun. Things we all need a lot. And, moreover, British humorists have always been an inspiration for our community. So, yeah, it was a good pick. A good choice, I think. Or is it a Latin reference? Because app fab could be also short name for app fabricant. Meaning, on my way out of the factory, kind of. So, it's a bit a way to say that I try to move away from the regular way we are building stuff in the front end in the front end community. But I'm being pedantic. Let's move on. So, we are always being into trouble with front end. 20 years ago, we were like, yeah, why do I have five different versions of G-query? Then, 10 years ago, we were like, why do I use five different Gs frameworks? And now we are like, why? And dying. So, that's basically where we are. We always create new problems to fix the old one. And, whoever not having the most recent problem is such a big loser. So, I know all these Gs frameworks are fantastic. There is plenty of choice here, of course. But what if I just want something simple? Okay? And, well, if you remember the 90s, it was a good time. Because in the 90s, you have two interesting things. Good British TV shows, for one. And, for two, super easy web technology. And if we focus on this aspect, HTML. HTML is cool. HTML is a true essence of low code. GS is not the web primary language. HTML is. And HTML has been fabulous since day one. It's what makes the web truly accessible for developers. Easy to develop. Fun. That's what I want. Again, HTML. And so, the idea is to say, okay, apps are not content. Right now, we are delivering shipping applications as if they were content. Everybody does that. This is rude. This is wrong. We should not go that way. The web pattern is about delivering content. And the browser is supposed to be the application. So, okay, we are not free to build the browser we want. But we have to think about the way we could fix the pattern we have damaged. So, what about considering components could be content? Components are nice. Components, something kind of new. I mean, not super new. But it's what framework right now use. The component approach is efficient. It's an efficient way to structure an application and efficient way to maintain over time, reuse stuff, et cetera. This is good. Actually, HTML is built that way. You have HTML component to build your DOM. But if we think about the custom application, it's also very nice to use this pattern which has been provided through the SPA pattern. Well, my idea with that fab is that we move that to server side. That means we store the components, front-end components, as contents into our back-end. And they are served dynamically to the front-end. So, how does it go? Imagine you have a server, let's say, Gliotina. It could have been Zop, it could have been Plon. But a server able to store a hierarchy. So, the hierarchy strategy is very, I mean, is everywhere on the web model. So, we know it works. My idea is to use it to manage components. So, you're going to have a tree, just like you would have a tree of contents in your website. You have a tree of components. And they have dependency. So, each component has subcomponent, et cetera. That's the file system. File system is a good paradigm. Developers love it. Okay. So, how it goes? You're going to have a call for a content. Let's say it's a poll. 
So, you will get from the server an HTML page. And this HTML page will contain, will also come with a JSON, the data. It will also provide the components, the main component to render the poll. So, let's say it's a bar chart. Okay. You get your bar chart component. This component is probably using subcomponents, like a button, for example, et cetera. The title, you can name it. I mean, you have subcomponents. All of that is going to be loaded dynamically because we have an initial content which needs it. Okay. Then, second time, you're going to display another poll. Well, you will have a call for this. And you will not get the HTML because that's still front-end. So, that's from the server side. But that behaves like an application. So, we don't need to have the HTML again. We will not reload the page. We'll just get the JSON. Okay. And we will get the new component we need if it happens to be a new one. So, in that case, it's going to be a map chart. All right. Well, the button, we already have it. It's in our cache. So, no need to load it, et cetera. So, we will just need to load what's missing compared to the initial page we were. And every time we're going to navigate, that's going to be like this, only loading the difference. So, it behaves like an app, like a front-end app. And it updates as a website because when you want to change a button component, you just replace this on the back-end. And any page needing it, we'll just load the new version. You don't have to rebuild anything. So, that's something we could call a transitional app. This name comes from Rich Harris, who is the creator of stuff. And I think it's a nice name. I think it works. It is not necessarily using it in the same context as I am, but that's the idea that you should move away from the SPA model and have something which is more adaptive, let's say. So, what do you get? First, you have no bundles. Nothing is bundled. Every single component you're going to create is going to be compiled by its own. And we'll have dynamic dependencies. How does that work? So, let's say you have a component we depend on other components. Well, it just imports it. And when I mean import, I mean import. That's the ESM support, so ESM stand for ECMAScript module. And it's not module federation at all. Module federation is something from Webpack. So, it's still about bundling stuff, but just making sure that your bundle can dynamically get dependencies and plug them dynamically. So, here is just about your browser being able to interpret import blah, blah, import such file. It works. That's not something I build. That's something your browser provides. Okay? So, when I compile a component in Alphabet, it will compile just its own code and everything which is imported will still be imported on the compiler. Compiled version. Okay? So, that's also the idea to do something which is low code. And that's why I choose Svelte. Svelte is super simple. Svelte is a component framework just like React and is super light. And it could be considered in the app fab thing like the templating layer. At the time you were using PLON with Jenga or whatever, you have this templating layer where you can do HTML and put some dynamic information to that. Well, that's exactly what Svelte is going to do for app fab. And, well, I know some people are working on a way to generate HTML on the back end again like HTML is about it. I don't believe in this approach much. I think that front-end technology are good. They are really improving the user experience. 
They are really improving the web. So this I want to keep. I want to keep the component principle. But I just want to make sure that we are not adding extra pain for the developer. My main purpose is developer experience. So let's see how Svelte works. Well, if you try to do a hello world in React, it's going to be something like this. It's not long, it's not a lot of code, but still, it's ugly. That's not cool. It's just a hello world and you have this kind of stuff. And, well, we are lucky there is no class here. But wait for it, let's go with the Angular version. Okay, now we have a decorator, a class, and blah, blah, blah, just to have hello world. No, no, no. Sorry, but it's difficult to digest. Really, I cannot swallow this kind of stuff. Definitely not. So let's check what Svelte does. Oh, that's HTML. That's HTML. And remember, HTML is cool. It's been cool since day one. And that's what I call a proper, sane and reasonable hello world example: h1, hello world, close h1. I love that. And what's cool is that this is a valid Svelte component. This can be compiled by Svelte. You would say, yeah, no need to compile it, that's HTML. Yes, true, but I mean, it is supported that way. So the best way to do hello world with Svelte is like this. So that's super good. And, well, if you consider what you enjoyed with the Python templating systems we had, I don't think it was really Python. Python was not the cool part of it. The cool part was HTML. It was nice to do server-side templating because you were just creating HTML and putting some dynamic parts into it, ifs, loops, that kind of stuff, and that was working. And that's very much what's missing in React and all the traditional frameworks we have now. We don't have this ability to directly act on the HTML. And I think it's super nice and super fun to be able to. So let's move on with Svelte. If you need some CSS, well, guess what? You have a style tag. Okay, that's not something totally crazy to imagine, right? But, yeah, remember, that's still something that is going to be compiled. This style is going to be encapsulated in the component. It will not pollute the rest of your page, et cetera. That's done the proper way, the component way. But it's still very classical HTML. If you need scripts, well, script tag. Here we go. So in that case, here we see that Svelte is also offering an actual templating layer, because I create a "let me" variable and I inject it in my template with curly bracket me, which is kind of easy to understand. Good. And the good thing is it actually behaves as a component. That means I can create something else, another component, where I'm going to import my hello Svelte component and display it. So that's the thing we didn't have with server-side templating. We had macro stuff like this, but it was not backed with an actual component approach. So that's a real component approach. So we have everything all together: the good from the front end and the good from the HTML approach. Plus, it is reactive. Okay, so reactivity is something we care a lot about, of course. In Svelte it's extremely easy, and the way to explain it is incredibly simple. You just think about an Excel spreadsheet. In an Excel spreadsheet, when you put a formula like "equals such cell times two", it's going to be twice the value from that cell, whatever happens, whatever you do, whatever you change. This will be always, always true.
Well, the way to put that with Svelte is to prefix a line or a block or a function or whatever with $. With $, it means this will be always true. That's what reactive programming is. No need to go into super complex explanation and concept and everything. Reactive programming is about to be able to write a line once and it will be always true. You do it that way. So, now we have those VELS components and I'll explain you how it works. What is Fabi's doing? Basically, every time you have a component, it is stored on GeoTina with its own source. So, the kind of stuff I've been shown. And also, is compiled version. So, the compiled version is always saved with the source altogether. And you can also create content because, I mean, GeoTina is super capable. You can create objects. You can create many, many things into that. Well, you can say this content in GeoTina will be associated with this component. It's going to be its view. All right? And the server, the FAP server, is smart enough to understand when we ask for such or such resource to serve it as Gison, as HTML, if that's the first page we are loading, or as JavaScript, if that's a JavaScript we want to run. So, all of that is going to be transparent. You just call for something and it will render it the proper way. If you already are into the navigation, it's already started, you will never get HTML again. You just get JavaScript and Gison. And all of that, of course, generates tons of requests. I'm aware. I don't care. It's maybe a bad approach. I'm not guaranteeing it's not. But it works. Of course, it works better with HTTP2. So, the default setup you get with FAP is obviously enabling HTTP2 and HTTPS, manually. And yes, you have tons of requests, but they are all super small because that's no bundles. Just super simple components, like you load a button. It's going to be loaded once because everything is cached. And then you are going to load the checkbox and everything. But at the end, it's not a lot of data. It's just a lot of requests. And through HTTP2 is pretty same. And we are using Eutina for what it does very, very well because that's a great API. You get tons of features for free. It's secure. It's super good at storing a bunch of small items, like in a tree, exactly what I need. And it's fantastic at handling bunch of sil-mechanious requests. So, that's exactly the thing I need. And so, basically, APFAB is turning Giotina into a front-end component storage system. That's what it is. It's like it's a super cool and fast and dynamic web file tree. You can see it like this. Now, my objective with APFAB is to make the developer experience as smooth and pleasant as possible. And I don't mean super high-ranked developer, super, super star or whatever. I mean, anyone. Anyone able to do HTML should be able to do APFAB stuff. So, when it comes to APFAB, it's a server, right? You have Giotina. So, it implies you have also Postgres. You have Python into that. And you have also, I mean, many things like this. And, yeah, kind of explanation. Like, you just install Postgre, 9. Well, 10 might be supported. Well, you'll see. And then, people install it always, always fail at the end. And then, oh, on Windows, I'm not sure. Never tried. So, not a good thing. My way to make it simple is Docker. So, with APFAB, everything is deployed with Docker, only Docker, just Docker. That's either locally for your own local server than for the pro server. Everything is on Docker. And low build. 
So, we've been through low code, low deployment, low build. Low build? Yeah. I don't want NPM install. I don't want webpack. Okay. But at some point, you need something, right? How do you do that? My first, my, my, my, my, the thing pushed me to move with APFAB was when I discovered that a calling of mine, so that's a UX guy in my company. His name starts with A. He's Catalan. I might know him. He's been involved in many things in the plan. He's a UX guy. I mean, yeah, a bit grumpy. You see the guy. And I saw him. He was modifying CSS and HTML through the GitHub web interface. You know, on GitHub, you can edit something and use change, and then it makes a commit. And then you see, are you going to start, right? So, he was doing that on a project in the company, and then waiting for 30 minutes until the full thing is compiled and deployed through Jenkins and all the blah, blah. And I was like, why, why, why, why, why do you do it like this? Are you, are you stupid? And to be honest, it was kind of a rhetorical question, right? I had a pretty good idea of the answer I had for this question. And it told me, it told me, yeah, but when I tried to deploy NPM install and it does not work. And when I have to reuse it like two months later, it's broken and painful. So, yeah, it's too painful. Too painful to have a local build. When you're a front developer, you have to do that. You have to fight this every day. It's going to make your fans go crazy on your laptop. That's not good. So, how do we go? How do we build something on a same way without this? I add an ID through the web. Yeah. So, why through the web again? And again, and again, and again. I've been there, right? I've been through many, many different aspects of that. I don't know. I think I am attracted. The idea that the program is able to change itself. That's probably the best way to shoot yourself in foot, right? Probably. But I love the idea. I remember the first C program I did. It was just deleting itself. And the first time I implemented it, I did it wrong because I made it remove the C file, so the source. And then I was fact I had to record it. And then I changed it so it removes the executable. But so that's stupid, probably, but I was fascinated. This ability to change yourself as a system. And yeah, I'm 47, an age where you are supposed to accept who you are. I'm Eric. I like through the web sheets. And this time, it comes with a different flavor. So it's not like Plomino where you have a huge UI with a lot of fields where you enter stuff and options to select, etc. It's only code. So code only. It's not like Rapido because here the focus is on simplicity. It should be easy and accessible for everyone. Rapido was not because the other, right? The other, XSLT, XML, that does not work. Failure. So here is just through the web way with with app fab. So I had prepared some video but live demo. Let's go. Let's try live demo. So this is app fab on page. So yeah, I like a small icon on the top corner. I think it's classy. So yeah, that's app fab. So you get two logs, of course. And then you enter the online interface. So here you have a file tree where you can create new components, new projects, whatever you want. So let's go. I'm going to go on the root and create a folder. Name it test. And in there, I'm going to create a component. So it's going to be a load of that. So as the extension is that it's going to be compiled whenever I need. So let's start with my initial example. I saved and I can preview it. All right. Good. 
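For reference, a minimal component like the one created in this part of the demo, including the dynamic variable he adds in the next step, might look roughly like this in Svelte. The variable value and the styling are made up; the compiler turns a file like this into a small standalone module with scoped CSS.

```svelte
<script>
  // a plain variable, injected into the markup below
  let me = 'world';
</script>

<style>
  /* scoped to this component only once compiled */
  h1 {
    color: rebeccapurple;
  }
</style>

<h1>Hello {me}!</h1>
```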
So now let's make it a bit dynamic for fun. Oh, no. That's live demo stuff, right? You get everything wrong. So let me equal eddy. And I'm going to say hello. And save. Oh, wrong. Of course. I use you instead of me. Well, you see the compiler, the compiler is smart enough. So all of that is compiled into the browser. So the compiler is super small and has no dependencies. So I can run that on my browser. It's like nothing. And yeah, it's super nice. You're going to give you warnings about some mistakes you might do. Like here, the variable is not wrong. If I create a variable which is used anywhere, you're going to tell me if I make accessibility fault, like no alt on an image, you're going to tell me as well. If I have a class which is not used, a CSS class which is not used, it tells me too. So it's super nice. Okay. So now it's fixed. I save again. And I get my component working. Okay. So that's nice. Let's go with other examples. So I will not live code everything. That's probably too risky. Let's go with this simple example. Here I have a component which is creating SVG. SVG is always painful to generate. And here, when you have just a pure template approach, it's actually not that bad. You create SVG tag. And here you have each loop which goes across the content cycle. So content is what Abfab push as a context, let's say. When you try to preview a content, it will be available for the component under content. Simple. And in my content, I'm supposed to have circles and I'm supposed to have a color as well. Well, I create as many circles as I have in my content. And they have different size and different location and the same color. So I want this component to be applied to a content like this. So this is my data. I have the color defined and the list of circles with different values and position. Well, the way it's associated is on the folder. I'm going to say the view view, so the default view, going to be circles.svout. Okay, great. So now, sorry, I'm going to close anything. Here I go. Okay, it generates my small SVG circle. So nothing crazy, but what's good about it is I can, so, yeah, of course, I can change whatever I want. If I change the color here and I say, we're going to render with the right color. I'm color blind, so I don't give a shit. But anyway, what is cool about it is it's publicable. So I can, of course, access this in his own page if I want. So this is my page, my component. That's dynamic. And I can also embed it into another web page. It can be done with other web components. It means I'm going to import a script, which is going to create dynamically my own custom element, which is going to be app.element. And I provide the content pass for my data. So that's a snippet of code. I can pass. I put that wherever I want. Or if I have security restriction with scripts, I can use an iframe. And that's exactly what I did on this GitHub page. So here on this GitHub page, I have the two examples, the one with the iframe. Well, it's not necessarily perfect regarding accessibility and usability, et cetera, but sometimes you don't have a choice. And it can be also custom element. In that case, it's directly into the page, just like the rest. And it's going to be themed with the rest, et cetera. So that's something super easy to copy pass, right? Just like you do with a YouTube video. You copy pass it and put it on your CMS or whatever it is. It's going to work. As you can see, they're still blue because it has been done before I made my change. 
But if I refresh the page, I get the red one. So that's totally dynamic, it's live, always. And as I refresh, just for you to know, I just reloaded the data. All the components were already cached, so that's for free. Coming back to the demo. So you can do this kind of thing, small renderings, like SVG, for example. You can also do more complex applications, complex, not extremely complex, let's be honest. In that case, I made a small contact manager. So nothing fancy. Here I have a list of contacts, and all of them are fantastic people. This contact manager is only for fantastic people. So Marie Curie, ADLMR, and me. And you can create new contacts, like John. So here you see we have navigation: we go to another page, we come back to this page, we can edit something. So here I navigate again, I leave my component showing the list and I change something, I save, and now it's updated here. I can delete. I delete John Doe because he's not fantastic. Yes, good. So this thing here could also be displayed in its own tab if we want. That's an application per se. It's a simple one, I'm aware, of course, but that's something you can do. You have an API to store and manipulate contents on the back end: create a new one, modify one, et cetera. And all of the contacts are stored in this folder. I created it manually, this contacts folder, and I decided that all of my data is going to be there. And I have a way to search across the data, collect all of them, display them, et cetera. So that's just like you would do in the ZMI, right? And if I want it to be somewhere else, that's not a problem either; it's just for the demo that I made it that way. What I can do as well is more complex rendering stuff. Like here, I want to display a chart, and the way I do that is by using an external library. So using this import support in my browser, I can import local things, like the component itself or all the subcomponents I might have, and I can also import remote ones. This is Chart.js, and I get it from unpkg.com. I use the ESM version to make sure it works. And then I can just copy-paste any Chart.js example directly here, save, and it works. I don't have to deploy, bundle, blah, blah, blah. All of that is going to work directly in AbFab. And I get this nice chart. Here the data is hardcoded and it's absolutely stupid, but it could be provided by some data you have. So you might collect some data from an open data system or whatever, you put all of that in AbFab and you can display it. I think AbFab is a good solution for dataviz, because with dataviz you need two very important things. I don't know if you're aware, but dataviz is about two super important things. One is, obviously, data. You need the data. And two, do you know what it is? No? Visualization. Yeah, I mean, that was an easy one. But yeah, you need that. Well, that's exactly what you can do with AbFab. You can store the data as JSON, I mean, as an object. You put it on Guillotina, you might secure it, you might do whatever you want with it. And you can also store the actual component able to render it. So it's all in one. I know all-in-one is not the best approach for everything, but in that case I think it's handy, because you can do that here, it's on an AbFab server somewhere, and you can publish it everywhere. So that's kind of nice. Another example, just because: here we have a graphic showing how many extra kilos I got because of lockdown, across months.
So yeah, it's interesting. I mean, you should be aware it's important to track it. So yes, that's what I have with app fab. And an interesting thing is that app fab is developed with app fab. So if you are if you go into this app fab folder, you get the actual app fab code. And it's not it's not super complicated. So you have pastana guy in there. Pastana guy is the CSS and fonts, basically, all the SAS variables have been turned into CSS variables. So it's dynamic. So you still have the variable mechanism, but it's all dynamic and on the front end, which is very much a pressure for for app fab, you have a very minimalistic UI implementing some pastana guy stuff like you have a button. So we just basically making the right call for the CSS class for a button. And basically, that's it. You might have, I don't know, an icon super simple as well. And then you are free to reuse those icon button checkbox. And that's how I did this UI just by doing this copy pasting some small part of HTML from the Albert code and patting the right CSS and I get my components. The core of app fab is actually very simple. I mean, it's not simple, but it's small. This is the core GIS library. And this is the main element here, which renders everything basically, the total is 200 lines of GS. So it's very, very minimalistic. The editor itself is a bit bigger because yeah, you have components to display your thing. And you can do fancy like previewing the editor in the editor. It's useless, but yeah, I like it. And, and yeah, exactly inception. So let's go back to my presentation. I gonna skip the video because I was successful with my live demo. Yeah, something important as well. Obviously, sorry, you need, you need any through the web solution to be to be workable locally as well. Oh, no, I come back to the very beginning. Sorry. Plenty of them. Anyway, so yeah, you want to be able to work locally. You want to be able to put your stuff on GitHub. This is critical. This is key for any through the web solution we've been in there so many times before. So yes, it is very important. So you can work locally in VS code or whatever it is. And you have just one comment to sync up or down your content with your app server. And this again is not with peep install or blah, blah, blah, it's just Docker. So you have a Docker command Docker run. It's a bit ugly for now, but I will I will provide a script which just does that get all the options possible and make it a bit more user friendly. Let's say it's also fully documented. Well, when I say fully is not fully, fully for now, but that's my objective. I am working on documentation first. And I have, yeah, I make sure that the code I do is is is consistent with the story I'm telling in the documentation. Why is that because I want, as I said, I was I want people to be able to use it without any pain. So it's about providing something which is simple, but something which is documented. That's vital. Now, what is it good at? I don't know. Really, I have no idea. I mentioned I mentioned the database, of course, probably a good use case. It could be used for small application like the kind of stuff I was doing with Plomino probably. I'm not too much worried about what it is good at to be honest. In my company, I see a point in doing this kind of stuff because we have a platform where we want to customers to be able to plug the system. So there are some backend systems like you can push a processing and it going to transform data. 
So like you add some kind of custom processing for the data, but we would also like them to be able to push some front end feature. And then how do you do that? Do you say, yeah, please, you just upload your ugly webpack bundle there and we trust you about what is in there? Yeah, not sure. So I like the idea that you can push each component independently with a clear source. It can be recompiled by us if we want. And this source we can read it and make sure it's okay. So it's a probably nice way to do that regarding security and also a nice way for the customers because they have something simple to use. So there is no barriers. They don't have to hire a reactive developer to do it. So that's one of the use cases as well as I have in mind. But yeah, to be honest, I did it just for fun. So for now, I don't have any much idea about where it could go. And yeah, I mentioned this. Appfab has been built with Appfab itself. And it makes sense because, I mean, Appfab could not be absolutely fabulous. It was not able to be absolutely fabulous. And vice versa. So yeah, I'm quite happy about that. And sorry, Ken. Now, final question. Is it totally stupid not to be entirely excluded? I think I've made some very discotable choices in there. And but yeah, as I mentioned, it's 200 lines of GS for the front end, the core front end, and 300 for the back end and Python. And most of it are decorators because that's how you do Python nowadays. So it's not a lot of code, which is a good sign, I would say. It cannot be that bad. Or is it? I don't know. So we'll see. That's where I am. Thank you all. That was it. Thank you, Eric. We did enjoy the history through the t-shirts at the beginning. I went and got myself my own Tone 6 t-shirt too. The question, I don't know if we have time. There were a couple of questions in Slido. We have just a couple of minutes. Mike asked, how do you provide libraries for local import? Yeah, you should unmute. No, I don't know. Seems they can't hear me. Yeah, we don't hear you, Chrissy. Okay. Unless you're not speaking. No, I'm speaking. That's all right. We'll move on to Jitsi. We'll go ahead and do that. So Eric will join Jitsi. Everyone else watching on Loudswarm, go ahead and join the link. I'm going to put it into track one. And thank you, Eric, even though you can't hear me. Go ahead and join him. Thanks. So I go to Jitsi myself.
Frontend techniques improve the user experience, but they damage the developer experience. AbFab (short name for Absolutely Fabulous, as the glorious TV show) proposes to throw away all these (npm)+(node)+(hyper complex javascript frameworks)+(gigantic bundles) and replace them with a more 90’s oriented approach (the 90’s is a period known for good TV shows and for extremely basic web technology). But yet… it is still providing all the frontend power and good stuff (because we are in the 20’s). Built with Guillotina and Svelte, it provides a ready-to-use online platform (should I say TTW?) to create simple and light components (should I say low-code?) that can be published anywhere.
10.5446/56820 (DOI)
Hi, I'm Simon Ser, also known as emersion. I'm working for SourceHut, and today I'm going to talk about optimal buffer allocation in Wayland. We've been working with other Wayland developers on a new extension called linux-dmabuf feedback, and I'm going to explain why we need it, how it works, and our plans for the future. So let's get started. Why do we need this new shiny extension at all? I'm going to explain a few details about the Linux and in particular Wayland graphics stacks, and I'm also going to present a bunch of example cases we need to handle, quite a few of them. So let's get started with the basics of the Wayland architecture. In Wayland, there are clients which talk to compositors. Examples of clients are a web browser, a text editor, a terminal emulator, a video player, a game. All of these are clients, and they are talking to compositors such as Mutter for GNOME Shell, KWin for KDE, Sway, Weston, and a few others which exist as well. What clients do is basically allocate a buffer on the GPU, then do their own rendering, so for instance a game can render a scene with a character, maybe. When the rendering is done, they hand off the buffer to the compositor and say, hey, please, can you display this buffer for me? So the compositor receives a buffer from each client, and then it decides which to show on screen. It needs to paint multiple buffers into one final buffer. This final buffer will be handed over to the kernel, and the kernel driver will program the hardware to display it on a screen. Compositors usually have an optimization called direct scan-out. What happens without direct scan-out is, as I said earlier, clients do their own rendering, then they pass the buffer to the compositor. The compositor does the composition step. It's a copy operation: copy the client buffers into a final buffer, and then hand over the final buffer to the kernel to display it. But in case there's only one buffer to display on screen, for instance if you're running a full-screen video player, then the compositor can completely skip the composition step. So it skips a copy. That's the red arrow here. It's an optional optimization, and it's pretty good because it allows the compositor to lower battery usage, so if you're on a laptop it's pretty important. It improves latency, and it leaves some parts of the GPU free for other clients to use. So if a game is keeping the GPU busy, the compositor won't disturb the game's rendering, for instance. Apparently most compositors have this optimization implemented for full-screen windows. But the problem is that this all looks great, yet sometimes this direct scan-out path doesn't work. So to explain why it sometimes works and sometimes not, I need to go into a bit of detail and explain tiled buffers. In general, when you need to store an image in memory, the straightforward thing to do is to take the first row of the image, put it in a buffer, then take the second row of pixels of the image, append it to the buffer, and so on and so forth till the last row. That's what we call a linear layout. But there are other ways to represent an image in memory. For instance, it's possible to divide an image into multiple tiles. So here, for instance, the image is divided into nine tiles. And the idea is that, instead of storing the image row by row, it's stored tile by tile. So here the first tile is stored in memory first, then the second tile is appended to the buffer, and so on and so forth till the last tile.
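To make the linear-versus-tiled difference concrete, here is a purely illustrative sketch of how the byte offset of a pixel differs between a linear layout and a naive square tiling. This is not any real driver API, and real vendor tiling schemes are considerably more involved.

```js
// Illustrative only: byte offset of pixel (x, y) in a linear vs. a naively tiled layout.
const BYTES_PER_PIXEL = 4; // e.g. 32-bit ARGB

// Linear: rows are stored one after the other.
function linearOffset(x, y, stride) {
  return y * stride + x * BYTES_PER_PIXEL;
}

// Naive square tiling: the image is cut into tileSize x tileSize blocks,
// each block stored contiguously, row by row inside the block.
function tiledOffset(x, y, widthInTiles, tileSize) {
  const tileX = Math.floor(x / tileSize);
  const tileY = Math.floor(y / tileSize);
  const tileIndex = tileY * widthInTiles + tileX;
  const inTileX = x % tileSize;
  const inTileY = y % tileSize;
  const tileBytes = tileSize * tileSize * BYTES_PER_PIXEL;
  return tileIndex * tileBytes + (inTileY * tileSize + inTileX) * BYTES_PER_PIXEL;
}
```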
So this is important because it allows the GPU to access the image in a manner which is more convenient and optimized in regards to the cache, for instance. So it's an important optimization for the GPU. So because this has been introduced by hardware designers, of course, each vendor, Intel, AMD, and NVIDIA, each of these vendors have come up with their own way to divide an image into tiles. Oftentimes, they are more complicated. This is the basic idea, but there are more complicated schemes which I use, more complicated tiling, compression, and so on, features like this. And each vendor has multiple ways to represent an image in memory. And of course, each way to describe an image in memory from a specific vendor is not compatible with other vendors. So we end up with a lot of different possible layouts. In DRM, we call these layouts modifiers. We describe the layout with something called a modifier. So we end up in a situation where some parts of the GPU are able to handle some layouts, some modifiers, but some other parts of the GPU are not necessarily able to support all of the possible layouts. So GPU has multiple parts in it. The most commonly understood part, possibly, is 3D Engines. It's usually responsible for running shaders, for instance. So 3D Engines, in general, can deal with a lot of tiling layouts, a lot of modifiers. It generally prefers quite advanced tiling layouts. The display engine and the other end usually doesn't support as many modifiers as the 3D Engines. And the video engine also has a different set of modifiers. So I gave a few examples here on the screen. We are only scratching the surface here, and the hardware constraints when it comes to modifiers are very complicated. For instance, if the screen is rotated, maybe the GPU will not be able to display linear images at all. It will only be able to handle tiled images. So we're starting to see that there needs to be some kind of negotiation. There needs to be someone who sits down and says, hey, I want to use to write an image with a 3D engine and display it with a display engine on the screen. So I need to pick a modifier which works with both of these parts. And I want to pick the best one. I don't really want to pick linear because it's not very optimized for the 3D engine to write into. With that in mind, let's move on to multi-GPU. So multi-GPU is basically when there are multiple GPUs connected to a single machine. So it can happen when, for instance, you have a box with two PC express ports and one AMD GPU and one NVIDIA GPU all connected to the same box. Because there are also some laptops which have an integrated GPU, an Intel integrated GPU, for instance, and a discrete GPU and the AMD, for instance. So here, for instance, let's suppose GPU 1 is an Intel integrated GPU with GPU 2, a discrete GPU on a laptop and battery. So when I'm battery, I don't really want to power on the second GPU, the discrete GPU, because that uses a lot of power. I want to save power. So the compositor decides to use the integrated GPU. And the compositor wants to tell clients to also use the same GPU to save power. So when it's cooked to be some way for the compositor to say to clients, hey, please use this GPU I'm already using so that we don't just randomly power on the second one. And clients need to know which GPU to pick. Now let's say that I plugged my laptop in power. So I don't want to save battery anymore. And I want instead to play game. So maybe the game is a bit demanding. 
And it would prefer to use the second GPU, the discrete GPU, because it's more powerful. So here the client locates a buffer on the discrete GPU, then does its rendering, then hands over the buffer to the compositor. On that done, the compositor needs to access, read the buffer on the discrete GPU, and render it with the composited with the first GPU. Because the screen is connected to the integrated GPU, the laptop screen. So there's already some kind of multi-GPU buffer sharing going on here. There are also some cases of external GPUs which can be plugged and unplugged, for instance, connected via USB-C. And that further complicates the matter, because the discrete GPU can come and go as a user plugs and plugs. So that's starting to be a bit fun. And there are even more complicated scenarios. For instance, if I have an external GPU and I start to plug a screen and the external GPU. And the compositor may want to also open both GPUs and take the client buffer composited with the second GPU and then display it and then that puts it connected to the second GPU. There are also other kinds of systems which makes things a bit hairy. For instance, arm boards for embedded use cases. I don't know if you've probably heard about Raspberry Pi, for instance. This is an example of an arm board. This is the arm board manufacturers want to have a bit of a fun evening. And they decide to pick parts from multiple manufacturers and glue them together. So for instance, one could have a display engine made by a whole chip or a winner or a M logic and glue it together with a render engine made by arm, a Mali engine or something else. So even though this is a single stock from user space, this looks like a multi GPU situation with two devices. The first device can only display images and cannot use OpenGL at all. The second device can only do OpenGL and has no outputs, has no notion of outputs. Screens connected. So yeah, this can be a bit of a problem. So with all of this in mind, our goal while designing the new Waylon protocol extension was to be able to tell the client which optimal device and format and modifier to use. But it's not that simple because clients may still have their own constraints and preferences. For instance, a video player may be wants to take advantage of hardware acceleration and wants to use video engine. And the video engine comes with some constraints and preferences when it comes to modifiers and devices and formats. So we need to reconcile both the compositor and the client and somehow pick the best combination which is still supported by everything both the compositor and the client wants to use. One thing which makes everything a bit more complicated is that things are dynamic. As I said, for instance, external GPU can come and go. And the number of that windows can move between screens. So a window could move from a screen which is plugged in GPU A into a screen which is plugged in GPU B. And windows can be on the screen like floating in which case direct scan out is not possible for most compositors today. And when the window becomes full screen, dynamic scan out becomes possible. And to take advantage of direct scan out, the client needs to maybe change its device format and modify it. Something to keep in mind is that some clients can't have all of the fancy stuff. Some clients are basic, have some other constraints. And maybe they pick the best device, format and modifier to use at start up and then everything is frozen. They can't change anymore. 
Finally, compositors — all compositors — always need a fallback plan. So it's not possible for a compositor to say, okay, I'll always be able to use direct scan out. Sometimes it won't work: modifiers can express some of the stuff which makes direct scan out possible, but it's not enough. There is never a guarantee that the optimized path will always work. So compositors always need to have a way to fall back to composition; the boring copy path always needs to keep working. A compositor can't just take client buffers and do nothing with them — that results in a black screen for users, and that's not acceptable. All right, so let's see how the protocol looks. We've introduced a few concepts. So we've introduced the concept of a main device. The main device is basically a good device, a good GPU for clients to use if they don't have any other preference. We've also introduced the concept of preference tranches. A tranche is a set of device, formats and modifiers which indicates some supported combinations, and the tranches are ordered by preference. So the first tranche will be the one which is supposedly the most optimized way to present an image. The second one can be used as a fallback if the first one doesn't work, and so on and so forth. And for the last one, the compositor really needs to make sure that the last tranche is always usable, or else maybe the client won't be able to display an image at all. So the first thing a client will do is obtain a feedback object. There are two ways to obtain a feedback object. The first way is to get the default static feedback object. It's like a global feedback object, not tied to anything. And the second way is to obtain a feedback object from a surface. A surface is like a window, to simplify. So the feedback object will be tied to this surface, and as the surface changes its position, for instance, the feedback object will receive some updates, some new events, and the client will be able to react accordingly if it wants to. Once the client has obtained a feedback object, it will receive feedback events. So here's an example of the typical feedback events a compositor sends. The compositor first sends the main device event. Here it says: if you don't have any other preference, it's a good idea to use GPU 1. And then the compositor will send the tranches. So here we have two tranches. The first tranche says: this tranche is about GPU 2. If possible, use format ARGB8888 with the modifier X-tiled or linear. If you're using formats and modifiers from this tranche, I'll be trying to use direct scanout. And that's it. If it's not possible, the client can use the other tranche. The second tranche says: this tranche is about the first GPU, GPU 1. If you're using this GPU, you'll be able to allocate a buffer with the format ARGB8888 with modifier Y-tiled or X-tiled or linear. And I won't be attempting to use direct scanout if you're using this tranche. And that's pretty much it. So let's see how clients are able to handle these events. There are multiple types of clients. Let's start with the basic baseline support a client could have, the most minimal stuff. Such a client would use the get default feedback request to obtain a global static feedback object which won't really change. And this client will open the device indicated by the main device event. And for each tranche which is about this main device, the client will try to allocate a buffer with a format and modifier specified in the tranche. So here, the client will receive main device GPU 1.
It will see a tranche about GPU 2, ignore it completely, and directly have a look at the second tranche which is about the main device, and it will allocate a buffer with a format specified there. Let's continue with a bit more sophisticated client. There are other types of clients, for instance clients using Vulkan. These clients already have a specific GPU selected, pre-selected. For instance, games often have a menu with a checkbox to select which GPU to use. So these clients will call get default feedback to get the static global feedback object, and they will completely ignore the main device because they already have selected one. Instead — I'll go back to the example — let's say the client has selected GPU 2 at startup; it will only look at tranches about this pre-selected GPU. So here, the client will only take into account the first tranche and completely ignore the second tranche, and it will do the same as the first client and allocate a buffer with one of these formats. One thing to note is that maybe the buffer import into the compositor will fail, because in multi-GPU situations the client selects a device which is different from the GPU used by the compositor, and maybe the buffers won't be able to be accessed from the other GPU. So clients need to keep this in mind and handle this gracefully. An even more complicated client will be able to dynamically change its buffer format and modifier. Such a client will use the request to get a feedback object tied to a specific surface. So at startup, it will look at all the events as usual, but the feedback object will send new events when the window is moved, for instance. So the client will listen for these events and re-allocate its buffers as needed. It works just the same, but the client is able to dynamically re-allocate its buffers. And then there's kind of a mix and match, where some even more complicated clients will be able to dynamically change the GPU they're using. Such a client gets the feedback object, selects the main device and then listens for events, and if the main device changes, it can close the old device, open the new device and then re-allocate all of the buffers. And as before, buffer imports into the compositor can fail. So what's the current status of the protocol and what are the next steps? Implementation wise, we're pretty good, at least on the compositor side. Mutter, wlroots, Weston and KWin all have the protocol implemented. So yeah, it's already possible to use it, at least in the Git version of these compositors. In terms of clients, we only have a single implementation right now that I know of, in Mesa's EGL subsystem, so this means Wayland-native OpenGL clients are able to benefit from all of this stuff, and they're able to dynamically change the modifier depending on whether the window is full screen or not and stuff like this. So that's pretty cool. There are still some things to be done. For instance, we're still missing Vulkan support. There's a merge request open for this; it still needs to be reviewed. Maybe something worth doing would be to add support for Xwayland. There are some early patches for this, but yeah, we'll see how it goes because we need to plumb this down into the X11 protocol as well. So yeah, we'll see how it goes. And also other clients could benefit from it, like maybe video players, GStreamer, Kodi, stuff like this. Another thing which could be improved is multi-GPU support in existing compositors.
Right now, all compositors that I know of have basic support for multi-GPU, but they always do the composition step with a single fixed GPU that cannot change at all. And they don't support direct scan out on secondary GPUs. So that would be something to improve. And they don't support multi-GPU composition either. So if a game is running on the discrete GPU and is displaying a window which is shown on a screen plugged into the discrete GPU, the same GPU, the compositor will still import the game buffer into the integrated GPU and then export it again to the discrete GPU, just for the composition step. So that's not great. Also, as I said earlier, it's not possible to express all those hardware constraints with modifiers. So we're still missing some work here that could be done to optimize buffer allocations even further. We've talked about this with James Jones from NVIDIA and we have some ideas to improve this, but see my talk from 2020 about this for more details. So that's about it from my side. Here are a few references you could look into if you're interested, and feel free to discuss and ask questions in IRC. I'm going to do a small demo of all of this. So Weston is shipping with a demo called weston-simple-dmabuf-feedback. It will basically act as the third kind of client we've seen, which dynamically changes its format and modifiers. So I'm going to run it to open a window, then full-screen it, and we're going to see what kind of events we see from the compositor. The window is opened, I full-screen it, and then it's closed. So what happened? Here we have a log of all of the events that happened. So the client first received some events with the main device set to this device, the first render node. I only have a single GPU so things are a bit simple. And the compositor says: I cannot do direct scan out. This is because at first the window was not full screen, so that's expected. So with this first bunch of events, there's only a single tranche. And that's it. And once I full-screen the window, the compositor sends another burst of events. And you can see already that the first tranche is different and has the scanout bit set to true. That means the compositor will attempt to use direct scan out if the client uses one of the format/modifier pairs from this tranche. Oh, I missed something here. Oh yeah, here. And we can see that in this burst of events there are two tranches: the first one is to use direct scan out and the second one is to not use direct scan out. So in case one of the buffers in the first tranche wasn't usable for the client, the client can always fall back to the composition tranche. And that's pretty much it. So if you have any questions, now is a good time, I believe, to ask them. And thanks for listening. Hello, so we have Simon with us here live to answer all your questions. First off, thanks for your presentation and for all your work on the Wayland ecosystem. It's greatly appreciated. And let's start with a question from me. So do you have any updates on the presentation? Has anything changed, or has something else come to your mind that you haven't touched on in your presentation during the week, because this was recorded over a week ago? You should probably unmute yourself. You should unmute yourself because you're still muted in Jitsi. Oh, sorry about that. Okay. All good now. Yeah, so yeah, thank you. For more context, yeah, I've been doing this presentation in a bit of a hurry.
So I may have missed something and I wanted to do a bit more, but oh well. Maybe something I didn't mention quite explicitly is, when I was talking about the different parts of the GPU, I was explaining that there is a 3D engine, a display engine and a video engine. When I first got into graphics, I didn't really understand that the GPU was split like this. So the 3D engine is responsible for drawing the frames, for running the shaders. And the display engine is where you plug your connectors, HDMI and DisplayPort and all of this stuff. And the video engine is for video decoding, hardware-accelerated video decoding. And yeah, it's important to understand that these parts of the GPU are not just one thing; they are multiple components, and the drivers in the kernel treat these parts as separate. So yeah, that's something I didn't quite mention explicitly. In terms of updates, the Mesa Vulkan support is still stalled because it needs review time, and review time is very valuable. Xwayland has seen some activity in the last few days; someone from NVIDIA has been working on it, so that's pretty cool. As I explained, we need to update the X11 protocol as well. We should be good, it needs a bit more time. Yeah, I think that's pretty much it. I have some more material for the Q&A, but maybe we can start with questions now. Okay, yeah, sure. So we can wait with the extra material until a little bit later. And the first question from the community is from Anunes. And the question is: what is the status of clients using toolkits like GTK and Qt, do they already benefit from this work just through Mesa EGL? Yeah, the good news is that as a client developer, if you're just using EGL, you don't need to do anything to benefit from all this work. The implementation in Mesa will allow most applications to just work out of the box. And I've been hearing reports that applications like mpv are just working fine and benefiting from this already. So you can check yourself by setting the WAYLAND_DEBUG=1 environment variable before starting an EGL app, for instance, and checking all the messages related to dmabuf. And maybe you can have a look and check that it's used. But yeah, it should just work. There's an extension to the same question by the same author: how about some more complicated applications like Firefox? Oh, so yeah, Firefox is a bit of a more involved case because I think it just uses low-level EGL; it doesn't really use the usual path. I think it manages the buffers itself with its own buffer objects, I don't remember exactly. Yeah, we need to investigate a bit more. I don't know offhand; I know someone who maybe knows better, so I can poke him later. Another question is from the_ed. The client needs to be aware of the GPU changes — how abstracted is this in Mesa? Yeah. So GPU changes are more involved because, by default, most clients will just select a GPU at startup and then just use that. So EGL and Vulkan clients won't handle dynamic GPU changes by default. If a client wants to support dynamic GPU changes, it needs to listen to the Wayland events directly. And it can do that: both Mesa EGL/Vulkan and the client itself can listen to the changes in parallel, that's not a problem. But the client will need to tear down the old graphics context and recreate it with the new device manually to implement this thing. Okay. Thanks for the answer. And another question from Ismail. How would this play with multiple scanout buffers? Some cards' display engines,
in theory, can display multiple buffers on screen at once, effectively doing some composition themselves? Yep. So this is about KMS planes — displaying multiple buffers at once with direct scan out. Most compositors don't support it apart from the full-screen case. The only exception right now is Weston; Weston can do this. And with this new Wayland extension, it should just work: Weston will send feedback to the individual surfaces accordingly. So the extension doesn't need to be changed to accommodate this use case, but compositors need to be worked on so that this can be made to work on Mutter, on KDE and on wlroots. And this is something I plan to work on, at least on the wlroots side, with things like libliftoff that I have been working on for a few years now. But we'll see. So far that was the last question I have over here. And thank you for all your answers. And you've said that you have something extra for us. Yeah, yeah. So there are a lot of things I left out in the talk. Let's see. One of the things I left out is that we designed a bunch of more technical stuff around the protocol. Up to now, the protocol sent the list of available formats and modifiers directly on the wire, on the Unix socket; it just sent the whole list of formats and modifiers. But it turned out that with some AMD GPUs, for instance, or NVIDIA also, there are a lot, a lot of modifiers. So the list was getting pretty large, like multiple kilobytes, I don't know, 10 kilobytes or something. And for a single Wayland message, this is too big. So we changed that. And now the compositor will write the list of formats and modifiers to a shared memory file, and will share this file read-only with all clients, just like it does for things like the keyboard keymap. So there should also be a performance improvement when you launch a client, because the compositor won't be broadcasting all of the lists all the time. So that's an improvement. Maybe you've seen from the talk that we try to handle as many use cases as possible, and there are many intertwined use cases. And it's really been a challenge to come up with a design which works well with all of these use cases. And something I haven't mentioned — just another layer of use cases — is that we also wanted to support drivers which are legacy and don't support modifiers. And this was a bit of a pain because there are many problems with that. In particular, when it comes to multi-GPU buffer sharing: if you don't have the format modifier, which indicates really how the pixels are laid out in memory, then the driver will just choose one implicit format modifier, and you don't know about this format modifier. And if you share this image between two GPUs, maybe the implicit format modifier used by one GPU will be different from the one used by the other GPU, and you will end up with a garbled image because both GPUs don't agree on the tiling to use. So yeah. We support these old drivers, mainly old AMD cards, but we need to be a bit careful about multi-GPU situations mainly. And yeah, to add a bit on the next steps: also something to be worked on would be better explicit synchronization. Right now there's a Wayland extension for explicit synchronization, but it's using sync files and it's exchanging file descriptors each time a frame is displayed on screen. So that's not really efficient. So we'd want to move on to something like DRM sync objects, and that's more work.
Yeah, a lot to be done yet. That's about it for me, I think. Yeah, so thanks for the extra content, and we have two more questions. So, is this multi-GPU configuration also applicable to virtual GPUs, like the ones provided by SR-IOV? I don't know a lot about virtual GPUs, so I'm not sure. I really need more context to be able to tell, so we can discuss afterwards if you're interested. Okay, and another question, not directly related to buffer feedback, but what happens when you run a full-screen client with direct scan out and then a notification from a different client pops up? So, yeah, planes. So if you're lucky, or if you have the new Wayland extension: when there's a full-screen client, it uses direct scan out. If a notification shows up and you're running Mutter — so GNOME — or KDE or Sway, then the compositor will fall back to regular composition, will fall back to OpenGL for compositing, and will stop doing direct scan out because it won't be able to display both buffers at once. But hopefully in the future we'll be able to support direct scan out with multiple buffers at the same time, with multiple KMS planes. As I've said earlier, it's something that we need to implement in compositors, and the extension already works for this use case. If you're using Weston, it should already work as is: Weston should be able to display both the Firefox window and the notification on top with direct scan out. Okay, thanks for the answer, and the last question seems to be — it's prefaced by a clarification that it's a very basic question. You said Linux dmabuf feedback is implemented in Mesa EGL. And the question is: are Mesa EGL and EGL Streams, the NVIDIA one, the same thing? So one thing to be noted here first is that EGLStream is an EGL standard; it's an EGL extension. And the NVIDIA implementation of EGL is not called EGLStream, it's just called the NVIDIA proprietary driver. So I guess the question is really: is this new extension implemented in the NVIDIA proprietary driver? I don't think it is right now. But I don't know. Yeah, I don't know — you need to ask NVIDIA engineers about that one. Or maybe wait a few weeks and see if it comes up. OK, I didn't see any more questions for now, but there are two people typing, so yeah, let's wait for that. And yeah, thank you again for an awesome talk and awesome Q&A, the extra content and all that you have done for the community, because your work on Wayland is just outstanding. Yeah, thanks. Glad you liked it. It's a lot of work. But also it's very time-consuming and energy-consuming to work on window system integration in particular, because you need to make people agree and there's a lot of politics in it. And not a lot of people have time to work on it. So with all of this, it's quite hard to work on this stuff. So we have a follow-up for one of the previous questions. I assume the latency for direct scan out is lower, so is there a glitch when switching back and forth? Does it happen? Is it visible or noticeable when you switch from direct scan out to no direct scan out when something pops up? So if everything is implemented correctly, it shouldn't be visible. There is a difference in latency in that the compositor basically needs to do a frame every 16 milliseconds or so. Without direct scan out, the application has less time to render because the compositor also needs its own render step. But with direct scan out, we skip this step and the application has more time to render.
So it kind of depends on the situation. If the application already isn't taking a lot of time to render, it should be fine. If the application is right under the limit, taking, I don't know, exactly 16 milliseconds to render, then you'll see the application be fluid while it's using direct scan out, and then when it stops using direct scan out, maybe it will become a bit choppy. But I don't think we can do a lot about this: if we can't use direct scan out, we just can't use the optimized path. And yeah, there's no saving grace here. So maybe in some situations we'll see a bit of a glitch, but this extension should just try to make things as not choppy as possible most of the time. Okay, it looks like there is just one last question being typed out right now in the chat — I assume that's a question because someone is typing. Yeah, those were really good answers and good questions. And it feels like your presentation was definitely one of the most popular ones here. So that's cool. And now two people are typing. So that's going to be interesting. Yeah, while they are typing, maybe I can give a few words to thank Collabora. Collabora has been helping a lot on this, because Leandro and Daniel Stone have both been providing the Weston implementation and the Mesa implementation. So they've been pivotal in making this happen. So really, thank you for this. Okay, the last thing actually wasn't a question, but a thank you for the answer, because the person who asked about EGL had just seen the word EGL used when talking about the NVIDIA proprietary driver, and Mesa is also using EGL, so it was confusing. Cool. Okay, we still have five minutes and there are still people typing. So, any more shout-outs? Oh, also we have Luke here now as well. Thank you, Eric, for organizing all this, the devroom, I guess. Maybe now is a good time to say it. Well, thank you for speaking and thank you to all the speakers today as well. And thank you to Eric and Martin for being the other devroom organizers, but most of the thanks should go to the FOSDEM staff, the FOSDEM organizers, who spent the last months in a mad dash as they usually do. And they will probably go into hibernation for half a year again as they do every year, and then somewhere in August they will re-emerge from their sleep and start working on, hopefully, the next real-life FOSDEM, so that we don't have to do a virtual one again. So yeah, everybody get vaccinated — it's the best hope we have of having a proper real-life FOSDEM again next year. And thanks for watching, from my side at least. Thank you. Okay, so that was the last question. The only other things that were typed in the chat since then were thank you for your presentation, and that's it. So I guess we can wrap it here. Thanks to all the presenters, thanks to all the attendees, and hopefully see you next year in meatspace. Yeah, sure. All right, all right. See you. Okay.
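To make the client-side event handling described in the talk a bit more concrete, here is a rough C sketch of how a client might wire up the feedback listener. It is a sketch under assumptions: it uses the interface and event names generated from the linux-dmabuf protocol XML (zwp_linux_dmabuf_v1 version 4 and its zwp_linux_dmabuf_feedback_v1 object) as I recall them, the registry binding and buffer allocation are left out, and the generated header name depends on how wayland-scanner is invoked — double-check against your own generated header.

```c
/* Sketch of a client consuming linux-dmabuf feedback events.
 * Assumes the zwp_linux_dmabuf_feedback_v1 interface generated from the
 * protocol XML (version 4 of zwp_linux_dmabuf_v1); registry binding and
 * buffer allocation are omitted. */
#include <stdint.h>
#include <wayland-client.h>
#include "linux-dmabuf-unstable-v1-client-protocol.h"

static void handle_format_table(void *data,
                                struct zwp_linux_dmabuf_feedback_v1 *fb,
                                int32_t fd, uint32_t size)
{
    /* mmap() this read-only file: an array of (format, modifier) pairs
     * that the tranches index into. */
}

static void handle_main_device(void *data,
                               struct zwp_linux_dmabuf_feedback_v1 *fb,
                               struct wl_array *device)
{
    /* device carries a dev_t: the "good default" DRM device to open if
     * the client has no preference of its own. */
}

static void handle_tranche_target_device(void *data,
                                         struct zwp_linux_dmabuf_feedback_v1 *fb,
                                         struct wl_array *device) { }

static void handle_tranche_formats(void *data,
                                   struct zwp_linux_dmabuf_feedback_v1 *fb,
                                   struct wl_array *indices)
{
    /* uint16_t indices into the format table for the current tranche. */
}

static void handle_tranche_flags(void *data,
                                 struct zwp_linux_dmabuf_feedback_v1 *fb,
                                 uint32_t flags)
{
    /* The scanout flag means the compositor may attempt direct scan-out
     * if the client allocates from this tranche. */
}

static void handle_tranche_done(void *data,
                                struct zwp_linux_dmabuf_feedback_v1 *fb) { }

static void handle_done(void *data, struct zwp_linux_dmabuf_feedback_v1 *fb)
{
    /* A full, consistent set of preferences has arrived: pick the first
     * tranche whose device/format/modifier the client can use, and
     * re-allocate buffers if the choice changed. */
}

static const struct zwp_linux_dmabuf_feedback_v1_listener feedback_listener = {
    .done = handle_done,
    .format_table = handle_format_table,
    .main_device = handle_main_device,
    .tranche_done = handle_tranche_done,
    .tranche_target_device = handle_tranche_target_device,
    .tranche_formats = handle_tranche_formats,
    .tranche_flags = handle_tranche_flags,
};

/* Elsewhere, after binding zwp_linux_dmabuf_v1 (version >= 4) from the
 * registry, a basic client would use the global feedback object, while a
 * per-window client would ask for surface feedback instead:
 *
 *   struct zwp_linux_dmabuf_feedback_v1 *fb =
 *       zwp_linux_dmabuf_v1_get_default_feedback(dmabuf);
 *   // or: zwp_linux_dmabuf_v1_get_surface_feedback(dmabuf, wl_surface);
 *   zwp_linux_dmabuf_feedback_v1_add_listener(fb, &feedback_listener, state);
 */
```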
Wayland compositors try to make the most of the hardware by directly displaying, without any copy, pixel buffers coming from clients such as games, browsers and video players. This lowers battery usage, improves latency and leaves the 3D engine free for clients to use. The display engines found in modern GPUs can often support this zero-copy mechanism only if the buffers have been allocated in a special fashion. However, buffers allocated this way won't be optimal for rendering, and only a handful of buffers can be directly displayed. As a result, a trade-off between zero-copy display and optimal rendering needs to be made. The compositor is the natural place where a decision can be made, because it has a global view of all apps which need to be presented. Once the compositor has taken a decision, it needs to be communicated to the clients. The brand new linux-dmabuf feedback protocol enables this negotiation between the compositor and the clients.
10.5446/56822 (DOI)
Hi, I'm Hyunjun Ko, working for the graphics team at Igalia and currently working on the Turnip driver. And today I'm going to talk about the status of Turnip driver development. I'm going to start with the contents for this presentation. First, I'm going to talk about what the Turnip driver is, and then the history of development for this driver, especially focusing on what was achieved last year. And then finally, I'm going to talk about the plan for this year. Okay, here we go. What's Turnip? Turnip is the code name of the open source Vulkan driver for Qualcomm Adreno GPUs. It is a reverse-engineered driver. As you know, Qualcomm delivers their own proprietary driver for Vulkan and OpenGL on the ARM architecture, especially for Android. But there have also been efforts to provide open source GPU drivers by reverse engineering, and Turnip is a part of these efforts. I'm going to talk about the history a little bit more in the next slide. Turnip is being actively developed in the Mesa community. Mesa is an open source project with graphics implementations of OpenGL and Vulkan for various GPUs, and it is hosted on the freedesktop GitLab for now. You can find the link in this slide and you can visit it. Turnip is worked on by people working for Igalia, like me, people working for Google, and others. And I think I'll have a chance to mention these brilliant people later in this presentation. So let me talk about the history before 2021. First of all, I have to mention Freedreno. That was created by Rob Clark around 2012. That is an open source OpenGL driver for Qualcomm Adreno GPUs. He's been developing Freedreno since then, and he kept improving the driver. Nowadays it's almost on par with the proprietary driver, as I heard, which is great. And his work has affected Turnip, because Turnip shares a lot of the Freedreno infrastructure, like the compiler and the DRM code and such things. The Vulkan driver development started in August 2018, and at that time Turnip became a reality. Igalia started contributing to Turnip at the beginning of 2020, and from that time we targeted the Adreno 630 and 650 GPUs. As far as I remember, Turnip was usable already, but lacked support for many Vulkan extensions and features which are required by real-world graphics applications, which means that there were a lot of things to do — to be honest, for me it was a bit of a nightmare. Then, during 2021, Turnip was dramatically improved by very talented people. As you see the names in the slide, they are very well-known in the community, and I must say thank you to all of these people. So now I'm going to tell you about what was achieved in 2021. First, we implemented lots of Vulkan extensions, as you see in the slide. To implement Vulkan extensions, first we need to investigate what the proprietary driver supports on Android. This is the point where we start to implement new extensions. It's like: first, we get the whole list of the extensions supported on Android by the proprietary driver, pick one of them, get dumps of the command stream for that extension, analyze them, and implement it in Turnip based on that. That's the way to reverse engineer. And this kind of reverse engineering generally hurts and makes people sometimes crazy, and sometimes even happy when they find something new. And this is how the Turnip driver is being improved, generally. And also, we did a lot of bug fixes for issues that were found by the Vulkan CTS and Piglit and other test suites of this kind.
And as a result of these efforts, including the Vulkan extension implementations, Turnip finally got Vulkan 1.1 conformance for the Adreno 618 GPU last November, which is a great achievement. And the last thing is that we tried to make it run Windows games with DXVK and VKD3D on Linux ARM, which requires x86 emulation. We've seen some Windows games running, and in this way we were able to fix some bugs that had been lurking in the driver which cannot be found just by running test suites. And that's why this is an important thing regarding driver development. Now, let's talk about the plan for this year. I think we can keep working on it to improve many things; especially, we are going to start focusing on real-world use cases like playing computer games on Qualcomm devices running Linux. Actually, we started around the end of last year. And as I said, we have seen some Windows games running already, but we will keep trying to run Windows games — there is a lot to go. The most important thing is that there are still not enough games running on the ARM architecture, as you know. That's why we are trying to run more Windows games instead. This is also important because we cannot wait for games running on the ARM architecture for a long time; we have to do what we can for now. Running Windows games is a good way to see the status of the driver in real-world use cases. That's why we are trying to run Windows games even with emulators, which is not ideal. Next is performance. Performance is always a key thing for driver development. It's always tough and hard: we have to do more and more reverse engineering to improve things and to figure out the unknowns. It has always been tough, but we have to keep going. Another big thing is that new-generation GPUs have arrived, the Adreno 7xx series. This should be exciting because, as far as I heard, they may start supporting shiny features like mesh shaders and ray tracing. This will be very interesting for us regarding driver development. The last thing is also worth mentioning here: there are still unknown instructions, which means we have to do more reverse engineering to figure them out. I guess most of them are useful for compute shaders, but that is not our main focus for now. I know we need to do it sometime. That's all. Thanks for listening. Hi guys, welcome to the Q&A session for the Turnip driver development status talk. After threatening our users that I'm taking Hyunjun hostage to ask him some personal questions, we did have some questions. Let's start with the first one, from Marta. Have there been some particularly difficult things to reverse engineer or implement in the past year? Yeah, as I said in the presentation, everything is tough when you're reverse engineering. I think I can choose one thing, which is when I worked on FP16 support — on Freedreno, not Vulkan, though. When I was working on FP16 support, I had to figure out how the hardware is working. That was very tough. As far as I remember, for about three months or more, I was investigating how the proprietary driver handles FP16 support. Yeah, that was the hardest thing in this development history. What particularly blocked you from getting further with the support? Anything particularly interesting? Yeah, I mean, FP16 support is the hardest thing for me because, to support it, we have to know how the registers in this hardware behave, but we have no documentation and we are totally blind about that.
So we have to investigate what the proprietary driver is doing by running some examples on Android, dump the whole command stream, analyze it, and figure out which registers support 16-bit and which don't. This kind of thing is horrible, and that was very tough. Sorry, I don't hear you. Luke? Of course not, because I had muted myself. So, a question from Eric, which is probably for his own use as he's actually also working on Proton, last I heard. So: I've noticed, as he says, VK_VALVE_mutable_descriptor_type on the supported extension list. I thought it's mainly for Direct3D 12 implementations on top of Vulkan. Do you know whether there is any reason for getting it implemented? Is there any use for it in the ARM world, or was it just for free? Yeah, because there are some efforts to run Windows games on devices, including Adreno devices — Qualcomm devices, I mean. I heard that some people trying to do that have been having a hard time, and this extension is necessary for running components like VKD3D or DXVK, which are used to run Windows games on Linux using Vulkan. And this extension is not that hard to support, because the basics needed to support it are already done at the moment. So I decided to just expose this extension support. So yeah, it's not hard to work on this extension. Okay, thanks for the answer. And another question, from Adya — one of your colleagues, I think. Does Zink work on top of Turnip? If not, are there any plans to make it happen? Yeah, I think, as Adya can explain, Zink is there to run OpenGL applications on top of Vulkan. And of course, we have already tried to make it run, and I think it generally runs fine with Zink. When you want to confirm it, you have to install x86 emulators and maybe DXVK on Qualcomm devices, and there is a lot of setup you need. It's hard to set up the whole software stack and there is no documentation about that. But I think we have to write official documentation to run something like that, because it's a very interesting thing for gaming people. So yeah, my answer is: it generally runs. I don't hear you, Luke — muted again, I think, right? So, thank you for all this. This is the final question we have from our users. Is there anything you still want to add to your talk, or anything you still want to talk about, anything that you might have missed when you recorded your talk? No, but it's quite an experience to present this thing. Have you ever been to a live FOSDEM then? Have you had the chance to join FOSDEM when the world was still okay? No, it's the first time. So since you're working for Igalia, and Igalia usually has a big presence at FOSDEM, at least in the graphics devroom, I hope to see you there next year in the flesh. Sure, I hope so too. Then thanks very much and see you. Bye.
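Since FP16 support and extension exposure came up repeatedly above, here is a rough illustration — not from the talk — of what this looks like from the application side: a Vulkan client can query whether a driver such as Turnip advertises 16-bit shader arithmetic. This is a sketch that assumes a Vulkan 1.2-capable instance and a `phys` handle obtained from the usual `vkEnumeratePhysicalDevices()` enumeration.

```c
/* Sketch: query whether a Vulkan driver (e.g. Turnip) exposes FP16 shader
 * arithmetic. Assumes `phys` comes from vkEnumeratePhysicalDevices() on a
 * Vulkan 1.2+ instance; error handling omitted. */
#include <stdio.h>
#include <vulkan/vulkan.h>

void print_fp16_support(VkPhysicalDevice phys)
{
    VkPhysicalDeviceShaderFloat16Int8Features f16 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_FLOAT16_INT8_FEATURES,
    };
    VkPhysicalDeviceFeatures2 features = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &f16,
    };

    /* Chained feature query: the driver fills in the FP16/int8 struct. */
    vkGetPhysicalDeviceFeatures2(phys, &features);

    printf("shaderFloat16: %s\n", f16.shaderFloat16 ? "yes" : "no");
    printf("shaderInt8:    %s\n", f16.shaderInt8 ? "yes" : "no");
}
```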
- Explain what the Turnip driver is about and who is working on it. - Explain what was achieved on Turnip in 2021. - Brief plan for Turnip development in 2022.
10.5446/56825 (DOI)
My name is Christian and I'm going to talk to you about containers in HPC, the 2022 edition. I had a session last year about containers and I talked about runtimes and engines mostly. So if you haven't watched the video, there's a link down there below, the short URL at the bottom right-hand corner, the recap is that I recommended still true today that you should not fight over runtimes anymore. They all do basically the same, right? And what I would suggest is to insist on SCI runtimes and image specs so that you have a common way of distributing images, starting images, and that we all agree on certain standards and not create our own runtime and image specification over the place. I think that works best. Okay, let me give you a brief overview. So the challenges for high performance computing, I always like to use this kind of slide that shows it in different stages and with an example that is a TensorFlow AIML use case or distributed AIML use case at the end. If we start from the very beginning, we would just use a Docker container without any contact or requirements on the host itself. So it's a single node total isolation container. The data comes in or is attached as a volume. Maybe the data started from a three, but it's still it's not shared across many containers and not shared across money nodes. It's a single, a single node setup with an unshared file system. And this is where a container got started seven, eight years ago, right? So you create a container that's isolated from the host, just interacts with the kernel to access CPU memory and IO. And as such, it's super portable between different hosts because you only talk to the kernel in this case. So you can build it anywhere and ship it and then run it anywhere you want. Of course, once the AIML folks got to know that accelerated computing is the thing, then they wanted to use GPUs. We have the same basic container, but what we add here is we use device path through, which is basically just a mapping from the host into the container. And NVIDIA was pretty fast to jump on the bandwagon here. They created a simple version first, and now it's matured into a very stable OCI hook that is able to figure out what the container needs and maps in the devices and make sure that the drivers are ready and so on. So that's pretty mature. But it's still it's a single node. It's the storage is isolated. We just map in the GPU. And this is, I think, where a lot of people maybe stop. But if you work in a group and you want to share data amongst multiple people or multiple groups, then of course you need to go one step further into this intermediate step where you still have a single node. You still have the GPU, but you have a shared file system and the POSIX file system that is, right, where you maybe have your data in your home directory. The home directory is controlled by your POSIX user ID or you have a shared file system like Luster that is attached. And what this means is that the usual way containers work is going to fail to some extent because if Alice has a user ID 1000 and Bob has user 1001, then due to the way that the container is architected or the container ecosystem is arranged, you are able with Docker to tell the Docker runtime to be someone else. So you can actually control what user ID you are within the container, which is as it is designed. So as Bob, I could say, I could pose as Alex user ID within the container. 
And in case I have a shared file system mounted, like the home directory or a project directory, then this will fail, and this is kind of a bad thing. And if you want to know more about how this relates to the different runtimes, I really recommend the talk from last year because I go into a little bit more detail there. So this is where a lot of HPC centers, or a couple of HPC centers, brought up their own HPC runtime to mitigate this problem, and what they did was create a userland runtime that operates under the same context as the user. And so you don't have this break of permissions, because Bob can only be Bob no matter what the container wants. And that's kind of the way this was mitigated. Another intermediate step is that usually the NVIDIA driver or the NVIDIA runtime makes a lot of good choices, but if you have maybe more sensitive workloads, then you also want to control the userland driver and the kernel driver on the host and in the container respectively, so that you have stable results. But that's an advanced GPU mechanism. It also applies of course to network cards, which are then used in the advanced stage. In the advanced stage, we don't want to just use one TensorFlow container but multiple, and to make this work we need to get access not only to the GPU but also to the network card — maybe InfiniBand or EFA, what have you. And we need to somehow orchestrate the different containers, maybe schedule them with Slurm for instance. And we need to make sure that the MPI domain can be set up: that we have connections, that we have TCP control flow set up, that we have namespaces set up so that the MPI ranks can talk to each other. And that's the advanced use case, of course. And this is where we talk about high performance computing. So MPI, Slurm, shared file systems and device passthrough — that constitutes actually HPC. Okay. And I will move that down again. Sorry for that. Here we go. Maybe I move it up here. Okay. So another thing that I want to carry over from last year is the segmentation of the container ecosystem. And thinking about it in these terms is something that we also embrace, because that's something that we also use in the workshop that we hold at ISC every year, and hopefully this year as well. So the bottom layer, which is on top here, is the runtime. The runtime is, just in quotes, a single binary that creates a containerized process which has at least an isolated file system view. Think of it like chroot, right? You change into a new file system directory and then you have your own view of the file system. And that's what runtimes do. Like runc and crun — runc is written in Go, crun is written in C — and youki is kind of new and it's written in Rust. And that's an interactive process: you start it, off you go, and you will have a process that is containerized. Then last year's talk goes into a little bit more detail here, or a lot more detail, and on the back of this I have a link to a workshop that also explains how this works so that you can step through it. The runtime expects a file system to launch a process in, so it won't handle things like image pulling or extracting, and this is where the engine comes in. So the lifecycle of the container image and of containers — multiple containers on a single node — is done by the engine; containerd or Podman would be examples. It creates a snapshot of an image.
So if you download a Docker image from Docker Hub, like an Ubuntu one, then those layers get squashed into a file system that the container can start in, and this is then handed over, together with configuration, to the runtime. On top of the engine — because the engine is only concerned with the single node — we have a scheduler. A simple one would be Docker Swarm, or maybe, if you are a little bit more advanced on the enterprise container side, Kubernetes, but it could also be Slurm. And this orchestrates container placement and, in the case of Slurm, also makes sure that you have gang scheduling going on, that the containers are in close proximity, and that they use the correct resources: if you want a GPU, that there are GPUs present, and so on and so on. And I think this is maybe the first line where you will arrive, right? You want to run containers with an engine and then you want to schedule those, and I think Slurm is a good example of this. But once you get to know containers, you want to build them yourself. And one piece of guidance, I think, is that to create container images you should use some reproducible way, and I will talk about this a little bit in a later slide. So there are tools like Spack which have built-in containerization support. You can do a spack containerize of a stack and it will create a Dockerfile that you can then use to build a container. You can also of course use EasyBuild to generate a software directory that you just containerize; that's also no big deal. There is HPC Container Maker, hpccm, from NVIDIA, that abstracts the Dockerfile creation and creates HPC and AI/ML stacks. And anything you can install software with, you can potentially use to create containers, of course. And once you have this container image, you need to distribute it, and usually you use some registry like Docker Hub, ECR, GitHub Container Registry — and there are a couple more registries. That's basically a tool that provides an API to distribute this OCI container image. And a lot of discussion is always going on about what kind of container we should push up and have people reuse. In my opinion, the OCI container image with multiple layers is a good way of sharing images, because if you have the same base and you just install another version of your application, like GROMACS, then there's a lot of sharing that can go on. And if you squash everything, then the sharing aspect is gone. So I think for distribution purposes, the OCI layers are a good thing. But of course, if you're on an HPC system, then you want to have one SquashFS or one bind mount or loop mount that does not have layers, because you want speed and you don't want to deal with all the layers. Anyhow. Okay. So let's move on. I think that's just a broad overview of the segments. What I want to continue with is some do's and don'ts for 2022, so that you get some guidance on what you should look out for. The first one is — and let me move this down again — the first one is: do not focus on runtimes. And I think we discussed this last year as well; I think it's a pre-COVID discussion. Runtimes are doing mostly the same thing. And when we talk about runtimes in this context, people often look at Sarus and Singularity and call them a runtime. But in essence, that squashes things together: if you want to run a container with Sarus or Singularity, you do singularity run or sarus run. That's maybe what people refer to as a runtime.
So that's a bit of a naming quibble. But anyhow, they are downloading an image, handling the lifecycle of container images, and then they execute a process — create a containerized process — for the user, under the user's context. This is true for all of the runtimes there are, all the engines and runtimes there are. So, in my view, I won't fight over runtimes anymore. And as an end user at an HPC site, I suppose you don't have a lot of choice anyways; you just need to go with the runtime that is installed at your center, and whether it's Singularity or Sarus or whatever, you just need to go with it. So making sure that you are able to use multiple runtimes, I think, is something that people should look for, and not focus on a particular one. Because once your environment changes, you need to make sure that you can mitigate the change by using a different runtime. And again, OCI images and the OCI image spec, I think that's the way to go here. Just a little overview that I thought would be helpful about runtimes. As I said, there are the OCI runtimes. The usual one, the first one, was runc, which is a Golang runtime. Then someone created crun, just because he or she could, and it's a little bit faster in some cases according to some papers. There's a new runtime in town that I read about a couple of weeks ago, which is youki, a Rust-based runtime, which is kind of fun as well. Either of those should be doing the same thing, just in different languages. Then we have engines that use those OCI runtimes, like containerd and Podman: they all create a file system and a config file that can be consumed by one of those OCI runtimes. And then — I had a discussion with the community and I'm not sure what to call it — there are engines with bundled runtimes, I think that's what I called it here. For instance, enroot from NVIDIA has a binary called enroot switch-root to actually create the process, Charliecloud uses ch-run, and Shifter uses shifter run, a binary that is executed when you want to create the container. So they are not OCI-compliant runtimes, or rather they are not using OCI-compliant runtimes, but something else. And yeah, that's a debate we have in the community. I really like the OCI aspect because it allows us to use OCI hooks and makes sure that everything is standardized, but of course people have opinions and it's fine to have their own runtime. And as an example of an all-in-one — and I think that's true for Singularity and Apptainer — they bundle everything in one binary, which is also okay, but it's even less transparent this way, and if someone comes up with a cool OCI hook, then with those runtimes it's harder to take advantage of it. Okay, another one that I think is also very important: do not start your container journey on Kubernetes unless you are born into Kubernetes. So if you're new to containers and you want to learn, then please don't start there, because if you look at this chart here — this overview that only shows the container ecosystem from Docker's perspective, with different platforms, clients, APIs and so on — that's already convoluted, and most of this stuff, or a lot of this stuff, is hardly relevant for HPC runtimes.
But if you look at the Kubernetes overview, like the cloud native landscape in 2022, that's like a zillion different projects and tools, and I think if you get started there, or get dumped into this, then you don't care about the container runtime piece anymore; you look through the landscape and there are so many nice and interesting things to look at that you will be distracted from the core of what you should learn first, which is runtimes and engines and how to start containers and then how to build them. After that we can talk about tracing and logging and all of this. So please don't start with Kubernetes is what I would suggest. Anyhow, if you want to learn, then do get started — it's pretty easy. What I mostly do is I use Sarus or some other HPC runtime, I just pop in the container via the runtime's start script, like sarus run for instance, and I just change the mpirun command to use the container instead of a native binary, and that's pretty straightforward. And even more — I haven't played around with this much because my Slurm is a little bit outdated, but with Slurm 21.08 there's even a native way of using containers within Slurm. The thing to note here is that you need to prepare your OCI image beforehand; that's not something that Slurm will do. Slurm will not download an image and extract it. So the lifecycle of the container images and the snapshots needs to be done somewhere else, and that needs to be integrated. But you can use this with srun and sbatch, so I think there are a lot of cool things you can do. And it's using an OCI runtime, which is also what I really like. And I think something to explore, from my point of view, for 2022 is to play around with this a little bit more. Do build your images with automation, or at least do it reproducibly. So please don't — and I think hopefully not a lot of people do this anymore — start a container with docker run, then install stuff within the container and just docker commit the file system to a new layer. That's hopefully not what people do. What I usually do is use Spack, for instance, because Spack has a nice containerize command: I create an artifact that defines how my container should look, like install GROMACS on Amazon Linux 2, and off you go. And then I create the Dockerfile and build the container on the host. And yeah, that's kind of my workflow, and you can do this in GitLab or Jenkins or whatever CI/CD you like, and this creates a very reproducible way of creating containers. Another thing that came up when I talked with someone the other day was: do not think only about the simple use case — or rather, do think about simple and complex use cases. And I think that's kind of an interesting one, because a lot of times an application developer might only think about the simple example of this one container that he creates, let's say GROMACS, but in real life, of course, you are embedded in the workflow of a certain discipline, or you are embedded in the workflow of a certain group or so. So maybe your application or your simulation needs interactive monitoring because your system does not converge, and it will run forever if you are not kicking the tires and killing it. And I think how it's used in the real world is something that people should take into account when they create container images. So don't just look at the simple example, or maybe your own use case, but also try to anticipate how others would use it.
And also on the spectrum of how fast can I get started and make the full workflow go, like how is my research productivity versus how much optimization is done to the containers so that it maybe only runs within FinnyBand or that it only runs on Ice Lake and not on Sky Lake. So of course, we need to get the most out of a certain type or certain architecture, but at the end of the day, if it takes two weeks to set up the container and the container runs for two days, if it's a simple one that maybe runs for four days, that is not super optimized but still works, then the one, the researcher that uses the simple will be faster to the result because he doesn't need to set up everything into two weeks, but he can just get started easily. So what I'm saying is, please make sure that you weigh productivity and optimizations and you don't go super deep on optimizations without thinking about the productivity involved. And related to the building pieces, do that thing or do think about how to annotate your images and compute resources because if you like build a lot of images, maybe for different targets, then what most people end up is like different tags or names for images for like the AMD or the Intel one or the Sky Lake and Ice Lake, and that's going to be a frustrating exercise. So we think we need to come up with some ways of standardizing our annotation and labels for our images and also annotations and labels for the nodes because what you can do with this and this is an example from an OCI hook is like say you have an Intel MPI or Mvapage in this case, Mvapage container, you can control with this little knobs here how this or which hook was used to map in which MPI into your container, so say that the container wants Mvapage, then if the annotation is set correctly, then it will automatically get the Mvapage library mapped into the container. If it's not, then it's something else. And this allows the runtime without actually looking deep into the container just by looking at the annotations, it can define or figure out what to do with this container to make it work. And also if it's in the registry, then you can also look at the image up in the registry and also maybe figure out whether it's able to run on your system and you don't need to download the 30 gigabyte image and then figure out, damn, it's not working because if I would have known before, then that would be great. Another thing I said that, let me move this up again, I said that we, I said Kubernetes is maybe something for more advanced ones or the Kubernetes natives, but what Kubernetes has is this feature, a node feature discovery, plug-in extensions that figure out what actual instructions are able on the CPU and then you can annotate your container and the scheduler will figure out, okay, this container needs ADX, this host has ADX, so I can schedule it there. And those are the two, also two of a couple of choices in the different ecosystem layers where this comes to play because it can be a runtime decision, it can be a scheduler decision, it can be done in the distribution phase, a lot of places where this annotation and labels are going to be useful. So let's make sure we think about that. 
Okay, and yeah, do connect with the community, I mean you already do because you are visiting the HPC Dev Room at FOSDEM, so summed up, but later this year we have ISC and in November we have SC, at ISC we have a tutorial that's headed by Andrew and Shane and Carlos, so please make sure if you are the ISC that you are considering going to the tutorial with this on Sunday, I think, or you go to the workshop which is submitted and hopefully accepted to be done in June and June 2, like the Thursday, which will be like a lot of people talking about all different aspects of containers in short, in short, lightning talks and at the end we have a panel discussion in each segment, so run, dis-rebuild, that's going to be fun, really looking forward and Hamburg is a nice city anyways. And SC22 is in November, I'm sure we will have the canopy workshop again with Shane and Andrew and all, so make sure to attend those. And on the top two links to a container workshop from AWS, which goes through like Shane Trude and RunC and shows how different container runtimes work and the other one is the tutorial workshop. Okay, I think that's all. Thanks everyone and looking forward to the Q&A.
This short talk will disect the container ecosystem for HPC in four segments and discusses what to look out for, what is already settled and how to navigate containers in 2022.
10.5446/56828 (DOI)
Good afternoon everyone. Today I will be talking about porting signal processing algorithm to QPy for precision measurement. First I would like to thank all the people associated with this project at CERN. This is the outline of my talk. First I would like to talk about frequency scanning interferometry system device for precision measurement. Next I would like to highlight about signal processing algorithm being used in this system. Then I would like to highlight about few of the algorithms and how they are ported to QPy and in the end my final thoughts about this project. Frequency scanning interferometry system is a distance measurement technique which can measure absolute distance up to micrometer precision between the laser source and the target which is a reflective surface or a retro reflecting a surface or glass balls. This technique is used in metrology and the system at CERN is one of the first kind which will be used for accelerators and cryogenics. It will be used for monitoring components inside the cryostat and provide precision alignment under harsh conditions like high radiation, low temperature and ultra high vacuum. On the left you can see photo of cryostat mounted with laser source and the reflective targets are actually inside. FSI is based on Michelson interferometry principle where source of laser that is a fixed laser source is divided into two arms by using a beam splitter. So one goes to the target that is the retro reflector and one goes to the reference mirror. So at the end that is at photo detector we obtain the interference of both the signal. The interference signal is represented by this equation where A is the magnitude of the signal and T tau determines the time delay between the signal and alpha is the sweep rate of the laser. So instead of using a fixed laser source we use a sweeping laser source in which here in this other system the fiber tip actually acts as a reference mirror and it reflects back 4% of the light and rest of the 96% of the light is reflected back from the target. So based on the number of cycles of the signal which we receive at the photo detection module we can actually determine the distance of the target. You can check out this presentation to know more details about frequency scanning interferometry system. But we needed multiple target points to compute the distance for huge components and to align there. So for this the interference signal can be depicted by this equation when it is received at the photo detector end. To deal with the multiple points Fourier analysis is used where FFT of the measured signal is calculated to obtain the corresponding beat frequency for each of the different distances. And so by knowing the sweep rate and different frequencies the different beat frequencies we can actually calculate the distance for multiple targets as well. But it is not easy to get the sweep rate of the laser being used. So we use a reference interferometer system for which we already know the length L and the laser is passed through the HEN gas cell where we observe the absorption of the peak through the spectrum of the gas cell obtained to get the actual value of alpha. So by having a reference interferometer system of length L and the unknown distances are acquired through the measurement channels we can actually relate these two formulas to find the distance for the unknown distance for different targets. 
So this is the fine actual system since we have to deal with thousands of measurement channels for different target points we use multiple photo detection modules are used inside a measurement chassis which actually acquire the analog signal with the help of ADC it is converted into digital signal and then it is transferred to the server side where we have GPU and it take care of all the massive calculation and post processing required. Now I would like to talk about the algorithm and the routines and the signal processing algorithms involved in the post processing of a post processing to obtain the final distance. So here you can see how the signal is acquired in the different stages and here the part where the GPU is responsible for doing all the calculation and where actual spectral and actual signal processing and the analysis happen. So the algorithm is basically divided into three main steps first is the data linearization where we pass the reference signal through a Butterworth filter high pass filter and we calculate the instantaneous frequency and the phase information which is used in the later two stages by gas cell and the measurement cell to linearize them. The second step is to process the gas cell data to obtain the sweep rate and last we need to do fast Fourier transform of the measurement cell to actually find the beat frequencies and subsequently calculate distance based on the alpha and the different beat frequencies in the FFT. So this is the complete view of signal processing algorithm involved. So how does our raw data looks like? So this is a 2.5 million simple points which were obtained at 100 megahertz and all these needs to undergo the processing in GPU to obtain the final distance. Now we know our algorithm and required signal analysis so we will be dealing with large set of data and for this let's see how Qpy maps to this picture and how it helps in speeding up the process. First I would like to highlight about Qpy. It is an open source library written in Python to provide high performance computation on GPU. It uses the underlying architecture of GPU and lot of CUDA libraries are already being used. So for example you can see for doing the linear algebra it uses Q plus for doing computation on sparse matrix and there is Q sparse library already present and similarly we can do more things like multi GPU data transfer and even create our own customized kernel. Along with this it provides a drop in replacement for NumPy and hence it is very easy to start for proof of concept and validation on GPU. It is an open source as I previously mentioned and it is distributed under MIT license and it's very easy to start and scale and also to develop customized kernel and run them using just in time compiler. So what does Qpy provides for signal processing since we already have a massive library which is used on CPUs. So along with a good coverage for NumPy there are scipy routines which can be adopted out of the box. For example the discrete Fourier transform for doing the operations related to linear algebra like linear decomposition or calculating eigenvalues. Then there are also different filters available for doing image processing and for calculation of sparse matrices as well. One of the interesting projects to look forward which uses Qpy and signal processing is Q signal but since it is a very new a new project so it will be very interesting to look forward to it and there is already a very interesting developments associated with this project. 
So just a few considerations that I observe or I thought of when porting any algorithm to GPU. First of all check the data format because Qpy provides with floating point precision that is float 64 and float 32 type of data. So it is wise to choose your data accordingly. Most of the time consuming or cycle consuming process is copying data from CPU RAM to GPU global memory or shared memory and this is one of the places where the computations can be reduced massively and it's always better to avoid the recursive functions on GPUs and GPU is more effective if you have to deal with computation for large data set and have have possibility for data and task parallelism. So now I would like to talk about few of the routines are the functions from scipy which are not present in Qpy and how they are adapted for the how they're adopted and how their performance is measured. So the first is Butterworth filter and signal filter and signal processing we use filters to allow specific range of signal. So Butterworth filter is basically a monotonic filter which allows minimum ripple in the passband and the stopband and it is represented by this transfer function and where cutoff frequency actually decides the range in which you want to allow your signal or not to allow your signal and here on the right you can see a low pass filter where the green line depicts the cutoff frequency so it passes all the signal till this point and it tries to stop rest of the signal. So here we are using Butterworth filter for reference signal and here it is you being used as a high pass filter to allow components all out to allow all the sample points above 100 kilohertz. So the algorithm is pretty much similar to scipy or to say it's more like a matlab style. So for creating the Butterworth digital filter we have to first create the analog prototype. So the filter is represented with zeros poles and gains. So first we design the analog filter based on based on order and cutoff frequency mentioned then we can actually map analog filters amplitude directly for digital filter but for frequencies we need to go through frequency warping and then since we are designing a high pass filter the slow pass analog prototype needs to be converted into a high pass prototype first and by using the bilinear transformation we can actually get the digital filter represented in forms of zeros poles and the gain and then you can convert into b into transfer function that is the b by a form. So this is how this function is called and it is pretty much exactly similar like scipy calling and to use this to apply this filter on a data set you can apply using l filter but since it's a recursive filter and it's in the recursive form so it is not adopted for GPU so the best way is to use the FFT since we have a very fast computation available for fast Fourier transform on GPU so we can apply this filter coefficients on a impulse response and do element wise multiplication with our actual real data and obtain the filtered output by using the inverse transform. 
Next I would like to talk about Hilbert transform so for any real signal if you want to want to know the instantaneous frequency or the amplitude Hilbert transform is used so to obtain the Hilbert transform the real signal is shifted to 90 degree and it is very easy to obtain it by using a fast Fourier transform and the analytic signal is actually the combination of real signal and the imaginary part which is basically the Hilbert transform and with this we can actually calculate the instantaneous frequency so the idea behind calculating the Hilbert transform is to first get the fast Fourier transform of the real signal and then all these Fourier coefficients have to be shifted to 90 degree phase shift and then do the element wise multiplication with the actual signal and by obtaining the inverse Fourier transform we get the analytic signal. So here Hilbert transform is used after filtering the reference cell to obtain the parameters which can be used for interpolation of the gas cell and the measurement cell so this is how the Hilbert transform has been applied on reference cell and we obtained the time analysis for this transform since we are using a fast Fourier transform and there is a very fast implementation on GPU with the CUDA so it drastically reduces the time and here you can see the phase shift in the reference cell like the orange one depicts actually the analytic signal obtained corresponding to the reference cell not only the time signal but we can actually check more like how our FFT kernel is exactly behaving by the time timeline analysis using NVIDIA N-SYSTEMS and this actually depicts some of the interesting information like how many threads are being created, how much memory pool is being used and how much exact time is being taken by this kernel. Next I would like to talk about Savitski Goli filter so it is basically a smoothing filter it keeps the signal our tries to retain the signal in its original form and removes the noise so it is designed based on the window that is the number of segment of the data you want to smooth or you want to calculate at a time and for all these points we obtain polynomial coefficients and it is calculated based on the order you provide so these polynomial coefficients are calculated using least square fit but if the signal is equally spaced we can calculate this least square fit and we can find analytical solution or a true solution as we say and use this as a convolution coefficient and convolve with the actual signal to get the smooth data. So here you see one of the implementation on one of the noisy signal and how the shape is mostly retained after filtering through Savitski Goli filter. So as I explained how we can do this and it is not exactly same as the sci-pi one but the idea is same that to calculate first the coefficients and then convolve with the actual signal. So we are using this FIR filter on gas cell data to reduce the noise and to reduce the noise in the signal so this is the spectrum of the gas cell data which is used to identify the alpha and this is the filtered output after applying the gas after applying the filter and here you can actually see the zoomed out view of the signal and how it is more smooth and the noise is reduced. So this is the time analysis of the filter and it is calculated on GPU and CPU respectively and for CPU the actual implementation available in sci-pi was used. 
One of the interesting facts is when we implementation through NVIDIA inside systems or by checking the kernel a different CUDA kernel statistics in the command line you can actually observe the areas where where a lot of memory copying is happening and check in the code where it can be avoided. Next after obtaining the filtered gas cell data it actually we actually need to find the peaks after a certain threshold but since there is no actual implementation to find the peaks in QPY yet so on GPU it was developed considering that a peak has it can be identified just by knowing the neighbors but the performance is not that impressive. So here are some of the more routines which can be used which are used some of the NumPy routines which were applied on the reference cell which is actually which consists of 2.5 million data points and here you can see how drastically the speed has improved with QPY and it can be more better as well. So one of the most used computational technique is fast Fourier transform in signal processing and thanks to the QPY library there is an implementation of FFT which is actually quite fast and it provides a flexibility to plan out the FFT as well and so here in this analysis we apply fast Fourier transform on the measured signal measurement cell to obtain the beat frequencies which are later used to get the final distance with the help of alpha and here you can see the FFT on GPU is very fast and you can also view how much time is being taken for each of the FFT kernel by knowing the kernel statistics and also what is happening parallelly inside the GPU at the same time. So my final thoughts about using QPY and signal processing. So QPY is a very good library to start with and it also gives the idea whether your algorithm is good to be used on GPU and how well you can use the GPU to offer large set of data. Since we know performance improving performance is like a continuous learning process so more benchmarking and more test analysis are needed to ultimately increase the performance and there is also ID to use custom kernels to improve the performance of drent algorithms and if this is useful I will be happy to upstream some of the developments to QPY repository and in the end I would like to mention about one of the famous code from Winston Churchill and and actually yes this is this is actually the beginning of doing signal processing on GPU there has been already a lot of work going on and it's a very interesting area to look forward to as well. So thank you for your attention and now I am open for questions and any feedback anything is welcome thank you. And thanks for being here at Fozdem in the HPC dev room to answer questions without the talk. I'll start with one from Jeffrey who says isn't this Mickelson interferometry a similar system to that used to detect gravity waves? Yes I would like to answer that since actually I'm not a physicist and expert with the gravity waves but I have heard that it is actually used for detecting the gravity waves as well and in our application it is more associated with accelerators and just for identifying the distance of components inside a cryogenic so it's more associated with the accelerators. Cool the QPY software that you've built all of this measurement system on top of is open sourced with the MIT license is the FSI software that you've developed on top is soon open sourcing that? 
Yes it's completely available on GitLab of Sun and later the only part of this project including the hardware it's available on open hardware repository so even the design or photo detection module to the card used for acquisition it's open sourced and then since it is our first prototype so this code will be also available in open hardware repository and also you can access through the GitLab link that I mentioned so it's completely open sourced. So there's a lot of low-level signal processing routines in QPY is there anything you think that's missing is there any of the stuff that you've developed do you think orps to be in QPY for everyone? Yes actually that's why I explicitly mentioned about those three algorithms that was butter filter and Hilbert transform and Savitski Goli filter so these are actually missing currently in QPY library and I will definitely want to upstream it in the QPY so that anyone else who wants to use it can use it and also improve the performance as well and the another thing to check out is Q signal project as well it is also very initial one where they are developing signal processing algorithms and they are also using QPY so it's a very good approach as well to adopt these filters for GPU so I'll definitely try to upstream it in the mainline QPY repository. Have you talked to the QPY developers? Not yet. So you said that this is a prototype system how long has this been in development? So it's been like a year I can say and it is targeted to be deployed for high luminosity that is when the accelerators restart in 2025 not the next year but after the another stop so it's meant for like after 2025 like completely deployed in the production level. So you've got examples of the performance on CPU and GPU and did you start with a CPU only implementation? Yes actually first we started with the CPU only implementation but for us the deadline or the higher like stop like it should be done within one second and at that moment we have to deal with thousands of channel at the moment and this actually was taking a lot of time in CPU so that's why we went for a GPU for like making this computation faster and it's also easy to scale as well so that's why we adopted for GPUs. Cool are there any tests to make sure that the GPU implementation gets the same answer within some level of precision as the CPU implementation? For the like initial phases we just compare what we are getting as an output from both the systems like just comparing it with the values we are getting and yeah so that was kind of a benchmarking we did for this system so later we are looking for more number of channels like going up to thousands of them so that will be a more faster test for this one. To understand like GPU is actually efficient for this kind of system. One last question before the thing cuts off Kenneth asks about AMD GPUs what would you do? Yes yes so first because we wanted to like prototype this and get to know like the idea we are thinking about the...
At European Organization for Nuclear Research(CERN), for the alignment of large superconducting magnets and cryogenics, an interferometry based system is being devised to identify the position of their elements. This technique uses interferometry principle and uses sweeping laser to identify the distance of multiple points using Fourier Analysis. The data acquired from photo-detection module, received after a sweep of laser source, needs to undergo sophisticated post processing to obtain the final results. The system must monitor position of a large number of elements every second. Dealing with 1000s of target points in less than 1 second required time-optimized and precise calculation. Thus, GPU was employed to provide faster and precise results. This required to use signal processing algorithms like: Butterworth Filter, Hilbert Transform, Savitzky-Golay smoothing Filter in GPU. This talk will cover steps involved in adopting signal processing algorithm to GPU to achieve better performance and understand the effects of parallelism achieved. For the initial development, CuPy library for NVIDIA GPUs is used and later moved to implementation in C. CuPy provides wrapper for most of the CUDA toolkit in Python. We will also provide highlight about performance metrics with respect to increase in the data size and possible optimizations of its processing.
10.5446/56829 (DOI)
Ο electricalDC command στο режим inverter Εγώ εσύ, το σωστό μου είναι ο Γιος Μαρκόμμανόλισ. Είμαι ο κερδίτης της HPC-Σαντις, στήσα στη CIC Center for Science. Είμαι εξατεί να ήθελα να παίζω κάποτε δημιουργία για να χρησιμοποιήσουμε μπρογραμμές, και το Roadsmap. Πριν ξεκινά, να μιλήσω πολύ σκοτή, τι είναι Loomi, το Supercomputer, που θα σημαίνει να μην υποσχεθεί σε Φιλλαντ. Για περισσότερες διεθνές, δούμε την τελευταία μου τελευταία. Θα μιλήσω ότι η ασφαλήτηση αυτής το μοτοσχεδόν είναι για τα AMD GPUs, που θα έχουν μόνο μόνο 1.5xFLOP AMD Instinct GPUs. Και τα GPUs, παρακολουθούνται για το ασφαλήτημα, και είναι ένας άλλος, που θα προσπαθεί σε κάποιες παιδίες και επίπεδες, και θα έχουν κάποια σύστημα, αλλά θα ασφαλήθω να εμπροστάξω την επόμενη. Τώρα, αυτό είναι ένα μοτοσχεδόν της αρχιτεξιότητας του Mi100. Αυτή δεν είναι η GPU της Loomi, είναι η GPU που έχουμε αξιόδησης. Υπάρχει κάποια ασφαλήτηση, αγγεία, χαρδουρία, και πραγματικά, πραγματικά, και πραγματικά, πραγματικά, πραγματικά, πραγματικά. Θεν είμαστεbour间θοσιαζοσιαζοgunογραφό 밀γικικέ Alabama violations. Υπήρχα στις αξίωσες, σ' αγγεία,σαίδε υ��ρρόιδ Scary wacky, με 16 ability to play etc. Έχ Elvis picked up a lot of them, but it's interesting. Και ένα στιγμή, μία εξεκουσία για HIP. Μπορείτε να βρεις πιο πιο πιο πιο τελειώσεις στην τελευταία προσπαθή. HIP έχει εργήνωση της εξεκουσίας για την προσπαθή, συμβουλήθηκε από MD, μπορεί να εξεκουσεί σε όλες οι πλαντωρίες, AMD και Nvidia. Πολλές γνώσεις, καλύτερα, υποστηρίζουν σε HIP. Υπάρχει μικρή αυξηγία, νέες προσπαθές ή μπροστά από CUDA, που μπορεί να εξεκουσεί στις HIP. Ο υποστηρίζειος του HIP είναι χρησιμοποιημένη με την αυξηγία HIP. Η CUDA-μαλογη γίνεται HIP-μαλογης, etc. Μπορείτε να βρεις HIP σε αυτό το βιντεί. Τώρα, ένα πιο πιο πιο πιο στιγμή για την δημιουργία με τη μητέρα. Το κόβδο είναι CUBLAS, ήταν να δημιουργήσει σε HIP-BLAS, για να επίπεδει με την δημιουργία μετά τη δεύτερη προσπαθή. Στην μικρό παράδειγμα είναι η δημιουργία με τη μητέρα, και η δημιουργία είναι το 2000 στις, και η δημιουργία είναι η δημιουργία. Τα CUDA-μαλογης ήταν να δημιουργήσει σε HIP-μαλογης. Τι βλέπεις εδώ, το ίδιο είναι το γεγαφλόπι, και το V100 έτσι έτσι έτσι έτσι 12-13 τεραφλόπι. Και εδώ το μητέρα έτσι έτσι έτσι έτσι έτσι 22 τεραφλόπι. Τη κλειπ του Καιρκολρα conjunto αλλά δενά. You can't solve the Xu's problem.あと加 γυρίμενα. Αν κοκσῶψε με είναι Μεου γεκαkich ηλέ저 οwali λίξη by 100 was closed to 70 seconds.And when I indeed Fื่χνιντκάδυ την νέα για γεροστηUNGή comment at humanity was close to 95 secas something like that. Um, so your lumer哈δεύ rainbow sworont performance DEVahi 100. Τότε εγγραφίσαμε την κόπτα και εγγραφίσαμε την Rochem Change Log. Στο Rochem 4.1 they decided to use 1000 threads per block instead of 256. Όταν χα ung curve efforts were here we might have a way of payer profit and pay damage to some components used to make this machine work. Now we will look within the expectations of thiskomendatore. Πάππλστρυμ είναι ένα μορφόρο μπαντ-βετσμάκ από το Ευρωπαϊκό Βριστολ, είναι ένα στιγμό να είναι μπαντ-βετσμάκ. Έχει 5 κέρνες, αδ, και μπορείτε να δείτε εδώ, σαν την φορμολά, μεταπλή, κόπη, τραίαδ και τότ. Τότ είναι ενδιαφέρον, γιατί είναι βασικές λιξίες, και η τραίαδ έχει also similar performance για άλλες κέρνες, θα μιλήσω λίγο λίγο λιξίες. Ξεκίνω να πω ότι χρησιμοποιήσω πιο πολύ αυτές οι μπαντ-βετσμάκες σε κάποιες κέσες. 
Πατ'επροσφήθανε την OpenEP performance στο μπαντ-βετσμάκ για MIMO100, το οριστονικό κόπημα είναι εδώ, στην OpenEP Target Teams, συμβείται παράλληλα για σύμπτη, το target να έχει το GPU, να χρησιμοποιήσει σύμπτη, συμβείται το δουλειάδ και να κάνει παράλληλα. Τώρα, το SIMD, αν χρησιμοποιήσεις το AMD BEAR Metal System, στις το LLVM, πιο 12, δεν κάνει κάτι, είναι εδώ, αν έχουμε ή δεν χρειάζεται, δεν χρειάζεται, αλλά αν χρησιμοποιήσεις το Cray Compiler, είναι μαντατορία, θα δεις έναν παιδί, ότι δεν θα είναι καλύτερο, λοιπόν χρησιμοποιήσεις SIMD. Τώρα, όπως ξέρω, ότι θέλω 256 θρατζιούς πιστόντας, και δυο χρειάζοντας από τα κοπιτετή, για να έχουν να κομμάσουν τα λάδες και όλα, ξέρω και να χρησιμοποιήσω το θρατζιού και να χρησιμοποιήσω 256 και 240. Τώρα, για το dot cannon, γιατί ήταν ένα τραγιλό ειναιρό, που εξεύσε με 720 χρειάζοντας, οι καλύτεροι παιχνίδες. Λοιπόν, αυτό είναι ένα τρόπο, πώς να χρησιμοποιήσεις μανιουλία, το παιχνίδι να ανοιγήσει για να ανοιγήσει ένα βίντεο, ανοίγηOUT, γιατι ο οικονομ trickω ορ hottest slide, καιution μες ciao Iímac Οι μοδελς είναι κούτα, η αγγεία, η αγγεία και η σύγχρονα, και θα χρησιμοποιήσουμε κούτα και η αγγεία. Θα χρησιμοποιήσουμε 3 τύπες εξεπέριμματα, συγχρονομή με το γλώμο εξαλίδιο, η σύγχρονα, η δύο σύγχρονα και η μυαλή εξαλίδα, οι διδίσεις και οι διδίσεις. Οι εξαλίδιες θα μπορούν να είναι προσπαθεί με πόνος μόνος και η σύγχρονα της Bellmax είναι σε αυτήν την γητ-χαπ. Οι γητ-χαπ είναι οι γητ-χαπ και βλέπουμε εδώ το V100 και το V100, και η δύο σύγχρονα, η δύο σύγχρονα και η δύο σύγχρονα. Βλέπουμε ότι από δύο σύγχρονα, κλείψουμε 7.5 θεραφλόπες, η δύο σύγχρονα είναι κλείψη, κλείψουμε το W40-15, καιστε reported on half, ότι ότανειτο που αναθύει εκτιειμένη, Έχες κλείψουμε το W-100 εκτιειμέ matinme, και φυσικά για τα άλλα. Και τώρα, μεταξύ με το MI 100, που επίσης είναι ένας γεννότητας πριν το Λουμιντ-GPU, και τι βλέπουμε εδώ. Η διεύτερη προστασία είναι σχεδιά από 10 τεραφλόπες. Η σχεδιά προστασία είναι σχεδιά από 22 τεραφλόπες, και είναι καλύτερη από το A100, αλλά η πρόστασία είναι σχεδιά, είναι σχεδιά από 43-44 τεραφλόπες, και είναι σχεδιά από το A100. Οι διεύτερη προστασία δεν έγινε πολύ σημαντικά στο A100. Τώρα, προγραμμότητας. Πολλογήσαμε με το εξέξεσμα, ούτε οι προγραμμότητας, το MI 100, το HIP, βλέπουμε πιο φλότημα, το HIP, το COCOS, το Αλπακά. Έχω ένα πιστ-μαξ για όλους, και θα προσπαθώ για όλους, αντί το Αλπακά. Τώρα, πιστεύουμε για το HIP. Είμαστε χρησιμοποιώντας το HIP, να υποσχεθεί το A-M-D-GPU. Σύγκληση C++ σύγκλησης για το Heterogeneous Programming, για την εξαιρετική εξέξεση. Συγγεννή προγραμμή με τα πλήματα και τα λαμβόντα, μεγάλο πρωμαντιοκοκλή, νερικο-αργον, κοτ-πλή, παρτασία, κοντάκτη, σπίτι, σύμφωνα, σύστημα. Σύγκληση 2020, η οποία εξεσφαλήθηκε τελευταία. Έχει πολλή τερμινότητα, μονοδοσχεία, μηχανή, μπαφέ, εξέσσορες, μπαφέ, δεύτερα, κομμάτια, κομμάτια, να σημανήσει την εξαιρετική εξαιρετική για τα GPU, εξέντερα. Hipsyκη σαπορτήσει το CPU, το A-M-D, το Nvidia GPU, και το Intel GPU εξεπαιριμένταλλο. Τώρα να δούμε κάτι σημαντικό, ότι το Nvidia GPU σαπορτήσει τώρα, έχει έξωσε το εξαιρετικό εξαιρετικό, ένα τελειό που εξεχθεί με Hipsyκη, μπορεί να εξεχθεί με το Nvidia σύστημα, μετά να έχει όλος HIP. Οι εξαιρετικές είναι ότι Hipsyκη δεν έχει δημιουργία με την εξαιρετική εξαιρετική εξαιρετική, αλλά είναι πως ξεχθεί το όνομα. Τι σημαντικό που δημιουργείτε, με το URAHPC σύστημα, κάτι που έκανε ο Λουμι με Hipsyκη, να δείξει το name. 
Η κόκκληση, ο Λουμι, από την σαπόρτα του Λεονάρδο, πρέπει να δημιουργεί σε εξαιρετικό εξαιρετικό, αλλά και επειδή από την σύγχρη. Κόκκληση KOKOX, εξαιρετικό εξαιρετικό πρόγραμμα σε C++, για να να εξεχθεί μία εξαιρετική εξαιρετική, δημιουργεί με το HIPC πλατφόρμος. Είναι να δημιουργεί αυσιαστικά για τα διαστασία για τα χαριακή χρησιμοποιή, από το HIPC, πολλές τερμινότητες, το βιβλίο, το εξαιρετικό εξαιρετικό, τα σύγχρη, τα θέα, το μηχανό, το μηχανό, τα μηχανό, τα μηχανό, τα πατράνια, τα πατράνια για τα παραλφόρια, τα ριδάξια, τα σκάλα, etc. Και τα εξαιρετικότητα, τα στιγμή, τα δυναμικά, τα θέα, etc. Σαπόρτω το CPU, το MD, και τα GPU, τα τελ-κ-ν-λ, etc. Αυτό είναι το link, είναι καλύτερο με πολλές τετωριές online. Ας συγχωρήσουμε για Αλπακά. Είναι ένα επισχέροδο λαμπράγμα για παραλικαίωση αξιλίδιων, και είναι ένα σχεδροδοσογικό, C++14 επισχέροδο, για αγιεργοδοσογικοδεύο. Είναι αγιεργοδοσογικοδεύο, είναι αγιεργοδοσογικοδεύο, σε Γερμανία. Έχει ένα κοτα-τερμινόδι, γκρύτ, μπλοκ, θραίδ, και ένα ελαιμινόδι, για να κάνουμε τα δευκτηρίζηση, να μπορείτε να κοτρώνει όλα. Η πλατφόρη αποφασίθηκε, σε ένα σχεδροδοσογικό επισχέροδο. Εύκολο να βρει κοτα-κοτ, με το κουπλα, η επισχέροδο είναι πολύ σχεδροδοσογική, είναι πολύ εύκολο, και με ένας τεμινόδι, Q, μπλαφές, και δευκτηρίζηση, να βρει κάτι, σχεδροδοσογικό, κοτω, τππ, απελευθέρωση, και εσένας, και δευκτηρίζηση. Είχατε συμβουλίδει με τους δευκολογιστές, και θα έχω βρει μπλασσίθη, σε Αλπακά, αλλά οι δημιουργές είναι σε ένα κομμάτι να το παρακολουθήσουμε. Τώρα, δεξάμε για μπλασσίθηση. Λοιπόν, εδώ βλέπεις, δεξάμε, οι δευκολογιστές τρίανες είναι για τα τραγία, και οι δευκολογιστές είναι για το δότ-κέρνι. Και οι δευκολογιστές είναι καλόνι, ή μπλασσίθηση, κοκοσ, και ανοίχιση της βιντής. Τώρα, αυτό είναι το V100, A100, και το AMD, και αυτό που δείτε εδώ, είναι για ένα βίντεο, είναι σχεδροφόρο, το V100, το V100, το A100 είναι κλειδιόνι, 1.4 τρπ. και το AMI 100 είναι κλειδιόνι, για ένα τρπ. Ως λεπτά, τώρα, θα πω, ότι το βίντεο, το A100 είναι πολύ κλειδιόνι, για το μπλασσίθηση, και σχεδροφόρο, το V100. Θα πω, ότι θα χρησιμοποιήσω, το κοκοσ, το μπλασσίθηση της βιντής, και δεν είναι οπτημότητα για το GPU, γιατί θα πω, για δύο λεπτά, για την καρνένα, για το τοκοσ, δεν είναι καλό, αλλά δεν είναι οπτημότητα, σχεδροφόρο, για το άλλο, να μπορώ να το χρησιμοποιήσω. Το V100, θα δούμε, ότι η ριδάξη μην παρακολουθεί σχεδροφόρο, στον τραγέντρο, να εξεχθεί το κοκοσ, αλλά το κοκοσ δεν είναι οπτημότητα. Τώρα, για το A100, δε θα δούμε, ότι η χυπσίκλη, η χυπσίκλη είναι λίγο, αλλά για το AMD, τι θα δούμε, είναι ότι θα δούμε, το ίδιο τρέντι, είναι σχεδροφόρο, είναι λίγο, με χυπσίκλη και το κοκοσ, αλλά το OpenMP είναι πολύ σχεδροφόρο. Αυτό είναι κάτι, ότι το OpenMP είναι σχεδροφόρο, γιατί η ριδάξη δεν είναι οπτημότητα. Λοιπόν, θα δούμε, να δούμε την παράγραμμα, την νέα βασίσια, αλλά η ριδάξη είναι κάτι, δεν είναι οπτημότητα. Αν το κοκοσ, έχει πολλές ριδάξεις, το OpenMP δεν είναι η δύο σχεδροφόρο. AMD Instinct MI250X. Λοιπόν, εδώ είμαστε, με ένα Loomi GPU καρδιά, και υπάρχουν πολλές διαφορές. Είναι σχεδροφόρο και διαφορές. Έχει δύο γραφικές, δύο γραφικές, και θα δούμε το επόμενο λάδο, με δύο γραφικές. Κάθε γραφική, έχει 64 γιγαμπάτια του HBM2e, το total 128, λοιπόν, έχουμε δύο, δύο GPUs σε ένα GPU. 26.5, τεραφλός πιγκοσχεδροφόρο, δηλαδή, πιγκοσχεδροφόρο, δηλαδή, είναι λοιπόν, 53 τεραφλόρες. 1.6 τεραμπάτια πιγκοσχεδροφόρο, με σχεδροφόρο, με δύο γραφικές, δηλαδή, 4 GPUs, 3.2 τεραμπάτια, και έχουμε 110 κοπιτ-γυνύσες πιγκοσχεδροφόρο, το total 220. 
Και οι γυστοί γυναίκονται με 200 γιγαμπάτια πιγκοσχεδροφόρο, πιγκοσχεδροφόρο, πολλία μετα line και η κανένα στην πιγκοσχεδροφόρο της<|ja|><|transcribe|> τον GPUaufν να με Pu� Rhon που ώρες, θεωρη close, pero λέω, πιγκοκδυφόρο, γυναίκονται για τα рос portal. Είναι πλήκο, από όλοι τεραμπάτια της, απ' που λέω, πα sevi τ referenced επριν σε Ta です! Εμ tinab Grale These. είναι ένα β invol Souq island, λοιπόν,σαμεριώło πως αν κρυ 은 κατοικονομία τους θα γνοιγεί που οι enquantoθένοι που πιδ Hobuuλο κ剌λ вал losing или tellbacklege, σε το σαμμι,θαinf breast buffer,ο hade σημα knotlle o γεταλειάμι,α μια αλογή την φχορη говорит για την φωρολογηση στη φτάσηκη διολογισμή προς την ύπη και πού ήθελα ότι child creatas negatives Αυτό είναι τώρα 4 τελευταία, 1, 2, 3, 4, πριν το μυαλό μπροστάχο ήταν 8, αλλά ήταν 4. Αυτό είναι το 1-GCD, είναι αυτό το πιστό, εδώ. Το 2 είναι αυτό το 1. Έτσι, κάποιες τους GCDs έχουν 110 κοπίτων. Έχουν 200 γμ μεταξύ αυτήν την πιστότητα. Έχουν ευθυνητή φαβρική εδώ, που πιστόνται από το γπ-U. Έχουν ευθυνητή, μεταξύ της γλυκότητας για το 2 γμ, και υπάρχουν 2 γμ, αυτό είναι 1 γμ, δεν έχετε δει 2 γμ, είναι 1. Έχουν κάποιοι να πιστόνουν ότι είναι 2 γμ, αλλά, δημιουργείτε, είναι ένας δίβλης, με κάτι που πιστόνουν 2 γμ, αν θέλεις να δεις τώρα. Είναι το τέτοιο, θα πω, το γπ-U είναι το καλύτερο τέρμα. Έτσι, τι είναι εδώ για να χρησιμοποιήσουμε αυτό το δίβλης, και τι πιστόνουμε εδώ. Ενώ, αν έχουμε έναν πιστότητα, τι γίνεται. Έτσι, πώς να χρησιμοποιήσουμε αυτά αυτά. Εκκλείτε την πιστότητα με το gpu-support, και πρέπει να εξηγηθείτε αυτό το βαρειό, να εξηγηθείτε το gpu-support. Δεν είμαι πίσω, αν θα υπάρχει τέτοι, δεν θα το εξηγηθείτε για το δίβλη, αλλά πιστεύω, αν μπορεί να υπάρχει τέτοι, ή δεν. Τώρα, χρησιμοποιήσουμε έναν πιστότητα με το gpu-support. Έτσι, 2 πιστότητα με το gpu-support. Και 8 πιστότητα με το gpu-support, αν θέλεις να χρησιμοποιήσεις 4 πιστότητα. Τι σημαίνει. Εάν η οικογένειά είναι έναν πιστότητα, δεν υπάρχει άλλη σωλίσμα, θα χρησιμοποιήσεις έναν gpu-support. Ενθρώπτι, campo-support, χρησιμοποιούς και άλλης γ implant,ές παραραθεί εδώ με τη δυναακήPRが υπάρχει τλειο-δυνατή. связοι, θέλεις να το κατα secreθ αυτές, αλλά πιστεύει σε την ευκαιρία σας, ευκαιρία. MI250X μπορεί να έχει ένα διεξιωτικό κονταξία σε το ίδιο GPU. Δηλαδή, σαν να υπάρξει πολλές API προσέσεις από GPU από την ευκαιρία. Αυτό που κοινόμενονται είναι να υπάρξει από την ευκαιρία, για να μπορείτε να ρίξετε πολλές API προσέσεις από την ευκαιρία του GPU. Αλλά, πρέπει να μην αυτοκρατήσει την ευκαιρία και όλα αυτά. Δεν είναι like a free ride. Έτσι, πρέπει να πρέπει να μην αυτοκρατήσει αυτή. Αν η ευκαιρία χρειαζει, να χρησιμοποιήσει πολλές API προσέσεις. Ενώ θα χρησιμοποιήσει πολλές API προσέσεις, δεν έχουμε τελειώσει, αλλά ελπίζουμε να εξηγήσουμε. Αυτό που δοκιμάζει, είναι ευκαιρία. Τώρα, OpenCC. GCC θα προσπαθεί OpenCC από το μετογραφικό κονταξί, τώρα το name CME-CTA, αλλά είναι να διεξιωθεί την ευκαιρία. Τι δηλαδή, το υπογραφικό κονταξί δεν θα είναι πραγματικό, αντιμετωπιστικοί με τα άλλα κομπαίνδια. HPE σαπορτήθηκε OpenCC 2.6, πραγματικά 2.7 για Fortran. Αυτή είναι πραγματικά και όλες, but είναι αρκετά. Και θα ανοιχθεί ότι δεν θα σαπορτήσει OpenCC για CC++. Τι δηλαδή, αν έχετε seen in CC++, δεν χρησιμοποιήσεις OpenCC για Loom, το γεννό μέσα. Τώρα, υπάρχει ένα κλακ από Οκριτς. Αυτό είναι για OpenCC, για LLVM, μόνο για C, Fortran & CC++, για πρόβλημα του Φυσικού. Είναι να διαφαρμόνει το OpenCC κόντρο, να ανοίχει την απελευθέση του Βασικού. 
Αν το κόντρο είναι στην Fortran, μπορούμε να χρησιμοποιήσουμε also a GPU Forta, τα βορμές σου τελείω. Εδώ είναι το κλακ από R&L. Και χρησιμοποιήσω εδώ το κλακ, το κομπαλ από R&L. Και έχω ένα κόμπι φάλλο. Οι εξοδοστασίες δεν θέλουν να σημανούνται για το moment, για να ανοίχει το OpenCC, να ανοίχει κάποια απελευθέση και να ανοίχει. Το οριγιό κόντρο είναι αυτό. Είναι το C, το Parallel Loop, το Reduction, το Private, etc. Και το νέο κόντρο που δεν είχα πει εξέπτω να χρησιμοποιήσει το κλακ, είναι το OMB, το Target Teams, το Map, για να ανοίχει την πρώτη Πραγματική Reduction, και να ανοίχει το Τεχνικό, although είναι κάποια παράγραφη, etc. και το Loop. Οπότε το έφυγε, δεν είναι ευκαιρία, αλλά είναι ακόμα υπέροχο. Για να δείξεις εδώ, κάποιες εξοδοστάσεις, Ευχαριστούμε, αλλά είναι για το V100 only. Τι θα δείξεις εδώ είναι το OpenCC από PGI, είναι ακόμα PGI, Clang 12, αυτό είναι το Publestream, το 5-κλασικοί κέρνεις. Τώρα, OpenCC GCC10, και OpenCC GCOG10. Το OG είναι το διευτεροχείο της V10, είναι το διευτεροχείο της OpenAP, που έχουν άλλες εξοδοστάσεις. Και εδώ μπορείτε να δείτε πώς το OpenCC είναι διευτεροχείο, όταν χρησιμοποιεί το διευτεροχείο της GCC10. Και η κρουσία ήταν το GCC-Σαπορτ. Πώς το OpenAP, αυτό είναι το OpenAP, εξοδοστάσεις το DOT. Το DOT-Ανθρωπή was significant improved, και among all the other cameras, but the DOT performance was significant improved here through the GCC. So they don't only do some performance improvements, but you see they're still behind from other compilers. And now the GPU Fort is a new tool from AMD, that basically you can convert CUDA photo codes or OpenAC photo codes to OpenAPO float or ship. And this way some stuff from Fort and they have to do manually, it goes automatically for you to create interfaces, to call the kernels from C++ files, etc. And there's a quite complicated workflow, but it works for some examples, but still there are some developments. And it's developed as you see here from Dominic and Mazda, from AMD and the team. And you can use AMD OpenAP after to use OpenAP or floating and all the situations. And just to show you a simple example, on the left here I have a code, I have OpenCC, you see in the box the main OpenCC part, and it creates the right part, if DEV original file. And what it says, if DEV GPU Fort, then start calling some GPU Fort, SCC Interregion, copy and other aspects and how to launch the kernel and stuff, but else is the original code. So this part is exactly the same thing. So this is only if you have GPU Fort, but it's not only this, automatically it creates also this part, the extern C routine, okay, to include the kernel, the launch for the kernel and it has many, many options that maybe you don't use in the original code, but it defines it either way. And here is the kernel. Okay, so all of this code is created from basically from the previous slide. And here you can see the kernel house created automatically for you, all this are automatically. Now, I had shown you last year the porting diagram and what's changed here, these things change. Though if you have OpenCC, you have Cray for Fort, a Cluck or Fluck if it's produced in the list and GCC and if Fort uses GPU Fort. Okay, so basically, I said also already about HP, this supports only Fort for OpenCC and the Cluck or Fluck OpenCC is a research project. So I don't know, we have no contract with them, so I don't know what will be released and GCC is still lack performance, but they're still in the game here. 
And if you use GPU Fort, you can use, if the performance is good, perfect. If no, you can profile and tune OpenAPN and HP calls, improve data transfers and see what else you can achieve. So tuning, use multiple wave fronts per compute unit is important to hide latency in instruction throughput. Tune number of threads per block, as I said, number of teams for OpenAPN floating. And other programming models support these things. Memory coalescing increased bandwidth. Anthronic loops allow compiler to prefetch the data. Small kernel scan calls latency overhead does a workload. Use the local data set memory as a small memory that's really close to the compute unit's close and it has really high bandwidth profile. This could be a bit difficult without proper tools. Conclusions, future work. The code written in C++ and MPI++ OpenAPN is a bit easier to be ported to OpenAPN or floating compared to other approaches. Hipsicle, Go-Cos and Alpaca could be good options considering that the code is in C++. There can be challenges depending on the code and what GPU functionalities are integrated to an application, how new they are also. You'll be required to tune the code for high occupancy, track historical performance, more new copilers. This is what I'm doing, but also you have to do when you use a new compiler. Maybe it can be worse for some things. GCC for OpenCC and OpenAPN for MDGPUs can be tricky to install and have many issues. Attract that. I'm tracking how profiling tools work for MDGPUs. I try to test RockPro, Tau, Scorpio, HPC Toolkit. Also, we have an accepted paper evaluating GPU programming models for the Lumio supercomputer. We'll be presented at Supercopyding Asia in March and we'll show more results with Alpaca also. So that's it for me. Thanks a lot for any question. Thank you. Let's do some live Q and A. Maybe very quickly the question by Chris, do we have any experiences on the time and effort involved in porting existing codes to AMD GPUs? So basically, it depends on what the programming model is. Είναι η Επίκ προσυκλέταση ξανά που θα πάλιضε τσρεπεζό σημαντικά. Τ0φήτια,ήρή ανώνα. Δε θα περίπου από εγώ,όλο τα θέα για Ουρανογέσα. Μπορεί να πηγαίνει και να π fluxι με την Επίκ προστασία καλά, άować την εξακληρώνη λειοκρότητα για ουρανkek votes. Ότιμοποτε下nuσα στις εξά συνάντητας, η Στοχ φυλαγημ<|is|><|transcribe|> ππαίξε daughters, ότι ξεlusion,ские εξοδητές,
During FOSDEM 2021, we presented in the same event the LUMI supercomputer and we discussed about the Open Software Platform for GPU-accelerated Computing by AMD (ROCm) ecosystem, how to port CUDA codes to Heterogeneous Interface for Portability (HIP), and some performance results based on the utilization of NVIDIA V100 GPU. In this talk we assume the audience is familiar with the content of the previous presentation. One year later, we have executed many codes on AMD MI100 GPU, tuned the performance on various codes and benchmarks, utilized and tuned a few programming models such as HIP, OpenMP offloading, Kokkos, and hipSYCL on AMD MI100 and compared their performance additionally with NVIDIA V100 and NVIDIA A100 (including CUDA). Furthermore, a new open source software is released by AMD, called GPUFort, to port Fortran+CUDA/OpenACC codes to Fortran+HIP for AMD GPUs. In this talk we present what we learned through our experience, how we tune the codes for MI100, how we expect to tune them in the future for LUMI GPU, the AMD MI250X, compare the previously mentioned programming models on some kernels across the GPUs, present a performance comparison for single precision benchmark, discuss the updated software roadmap, and a brief update for the porting workflow.
10.5446/56833 (DOI)
you Hello everyone, my name is Fabien and we're going to discuss about the metaverse today. I'm going to start by introducing myself, define what the metaverse is, because otherwise if we start to build or see how it can be built properly without defining what it is, then probably we're not going to talk about the same thing. So it's going to be a definition, obviously their canonical definition. And then I'm going to show you how to build one version of the metaverse, thanks to existing tools. Why do I start, let's say with a non-technical part is because I think honestly starting with why it matters, why it's interesting is more important, otherwise we build really cool stuff, but there is no point. So even worse is dangerous. So that's the little visual that you can see there on the screen. It's a bit of a mess, so I'm going to decompose it. And I'm going to start by introducing myself. So my name is Fabien Benetou. I work at the European Parliament. Here you can see me letting the former president of the European Parliament try some AR glasses and using augmented reality do some facial recognition. So a lot of proof of concepts like this. Testing is the same hardware, but how it could be used and a lot more different prototypes, events, creating things, focusing on virtual and augmented reality on the web. What motivates me to do this, besides the art school new tech, is because I have a bunch of notes and I want to organize them. So you can see some example of documents there and being in a virtual world. Not even in video actually, because you can see the video and someone else being there. So it's social. And then I have my notes on it and a bunch of different examples like contrary to my live bulbs, organizing the documents again, managing your camera anyway. A bunch of different prototypes and proof of concepts. I mentioned the proof of concepts part because that also means I'm not working on production ready software. So my perspective is going to be a lot more about discovery, thinkability, but not necessarily like long term maintenance. I think we're at that stage now anyway for the topic, both of VR or the metaverse that it's still early stage. So that's also why the architecture in my opinion is so important to get right, not just making stuff. And yeah, you might have heard the metaverse term recently, but the first time I not just used it, but participate in a professional meeting was then and that is in 2016 in San Jose, the Samsung HQ. The goal was to define actually one of the core components, which is link traversal. So the WPC for those who don't I imagine most people into JavaScript know what it is, but it's basically defining standards for the browser. So how the browser are going to behave the same way when we ask them to do the same thing. That's very important, especially at least to know what it is and how it works, especially when we have so few browsers in general and especially in the other. So very few browsers that are WebEx are compatible. So knowing who is doing what and also being aware of their motivation, I think it's pretty crucial. But anyway, the point is, I'm not deciding to make a metaverse presentation for first them out of the blue. I've been working on this for more than five years now. So here's my VR. First of all, you don't need VR to be in the metaverse. You can really well without any problem go around in a 3D world connected to another and see people work on solving a problem without one of those fancy headsets. 
First of all, they are fancy, but they're getting cheaper and cheaper. You have either standalone ones or ones like this that connect to a desktop, a desktop that can run Linux. It's working really well. I'm not saying it's going to be trivial plug and play five minutes setup, but it's really working properly. But again, why in the first place? Okay, you could use a VR headset to join the metaverse. So why? Because you're going to feel there. I know it sounds silly. And if you've never tried it, that's going to be actually my first recommendation is do try it. And you can remain critical about it. Because this is far from perfect. But it works. So that's the amazing part is like you put the headset on and then you feel or you know you're somewhere else. You're physically present here, let's say, working from home, little office. And yet you see a different place and you feel you there. So this is amazing. This is like an amazing opportunity. It's not a requirement once again. But I do recommend you give it a go at least once again, even if you say, well, this is a gadget that is stupid, I'm never going to need it. Perfectly fine. And my perspective on this is also that you don't need every tool all the time. You might need a pen and paper to sketch through. You might need a computer to develop. You might need your headset to visit an architectural model, but you're not going to visit an architectural model through text. You can that's going to be just too challenging and you're going to have different interpretation. The same way you're not going to read a 300 pages in a view headset, you can do that, but I don't recommend it. So then it begs the question like what is your for when is it actually interesting. So I left a couple of examples here. So this is for the toys. It's a game if you want. And you're exploring what it means to manipulate not just one D2D3D but also for the objects in a mathematical space. So I think for this kind of usages, that's really, really interesting. That's really fun because it's I don't know how to do this otherwise. And in that's the kind of, let's say use case where VR really shines is you just you just don't know how to make that without a headset without some manipulation and being in space. The other part is places where you shouldn't be because it's too dangerous or it's too far away. Let's say we're not going to go to Mars, not you, not me, but we can be there through VR and games like this. I'm going to the whole video play, but it's kind of crazy. It is. So you have some pretty scary creatures and places and then you have a mission. You have some game dynamics. So that's another really cool usage is that you're telling a story shaping a moment in time for someone. And again, it works. So the other part is so I said I wouldn't read a 300 pages document, but my own motivation for VR is here I have a small office and I want to have my posters, my notes, because I take a bunch of notes on any kind of support. And I want to be able to organize. So I don't want to read my notes in VR because it's just like the pixel density is too low. And it's just not as pleasant. And maybe I'm so not used to it. Maybe I like to be on my couch and just scribble on my notes. So this I don't want to give up on. But one part that I really like and I don't know if some of you have done it is like toss all the papers I need to read on the floor and then organize it or stick it on a whiteboard or pin them on the wall to get some structure out of it. 
I think this is really cool. So I've done a lot of prototypes, different ways to do it, a kind of notes city where each of those are actually pages of my wiki. So this is basically impossible to do otherwise, especially if you keep it in sync. So in my opinion, that's number one, you don't need a VR headset to explore the metaverse or build on the metaverse. It is amazing if you do because then you're going to feel there and with a shared space with colleagues or friends or strangers, but you don't need it. And if you do need it, if you do rather decide that you're going to use a VR headset, make it worth your well, make it worth enough time, excitement, enjoyment or find a radically new way to do things that would otherwise be impossible. VR is not necessary. It's just another tool in your toolbox. It's nothing more than that, nothing less than that. The same way you would use a pen or a desktop with a keyboard or a tablet, whatever device or screwdriver, whatever. It is what you make use out of it or so. One yet another tool in your toolbox. It makes the metaverse a lot more there once you're in it. So I'm going to give a practical example of what are the building blocks, let's say, of the metaverse. I'm going to use Mozilla Hubs because it is open source and it's web-based. So that's the JavaScript part. It's using React for the 2D UI and a frame for the 3D and VR elements or components and itself a frame is based on 3DS. So I'm going to start by creating a room, which is going to be a 3D world and as I said before, VR is optional so I meant it. You don't need a VR headset. If you have one, try the ones. It is amazing. If you don't, then you can use a desktop. You can even use your phone or a tablet. So here I'm just navigating around, looking at the 3D world and you can modify that of course because it is on GitHub. The server side is relatively complex but then the client, which allows you to see all this and connect with other people, is much easier to tinker with. And I have a bunch of different prototypes so feel free to ask me what's possible or not happy to clarify. So what of course I can do is connect with another machine to make as if we were there in the same space together because that is one of the points. Entering the room and I'll go navigate where my other little robot is. I'm going in front of it so that we can see the mute quickly. And voilà. We can see each other let's say. So that's a good step. But then okay, just like one 3D world, one on the web across platform. That's cool but that's not the metaverse, is it? Sure. But then how do I make my metaverse? So I think starting with hubs is good because it shows the potential of a full blown experience. Again, across different solutions, phone, headsets, desktops that I'm going to show. But I think it's quite heavy because there are a lot of different moving pieces. So an easier example is this. It's called network A-frame. It's still the same basic component, meaning A-frame with 3GS to do 3D on the web. Here I see two different Avertar but very basic. What's a bit different, let's say, from a traditional immerse or rather network A-frame experience is that I can log in here in the top right corner. So what I can do is I can log in through this. So I'm going to go through this server, immerse.ovh, because I logged in as utopia.at immerse.ovh. Accept. And then it's going to load my avatar. It's a cube so it's not super exciting. But yeah, you can see it working there. 
And then what I can do here on the other side is, logging as not utopia and the interesting part is it's going to use another server, immerse-first-dem.ovh. So I'm going to log in there and allow it. And you can see my nickname changed. I don't think I had an avatar on this so it kept the very cute, let's say, round person. So that's it. That's the metaverse in my opinion. We can stop here. I'm going to explain or at least decompose a little bit what happened in that session and then how to do this with hubs at the end. So what happened is that I can click on my profile as not utopia among immerse.first-dem.ovh. Whereas if I go on the other profile, I'm on immerse.ovh. No first-dem in there. So I made those two servers a couple of days ago and it took me a couple of hours for each and it's really on affordable servers. I'll put links on how to set those up. But yeah, that's the important part is that I have two different persons or profiles, but they want two different servers and yet they can talk to each other or they could before I moved there. So how does that work? Well, maybe you're familiar with Mastodont or PeerTube. So I have my own PeerTube server and I put a bunch of video including this one, how to set up hubs. And the beauty of it, besides being PeerTube and working, is this aspect of the federation, meaning that I'm on my server with a couple of users, but also I have 35 followers but they are not personal followers. They are instead entire instances, which also pose their own video, with their dedicated topic, dedicated culture, etc. And I don't have a son what they're doing on their server. They don't have a son what I'm doing on my server and that's again the beauty of federation. We have something and we're connected through a protocol, specifically here, ActivityPub. And that's the same thing on eMirrorz, being for network day frame, or for Mozilla hubs, or maybe if you make your own implementation, there are eMirrorz clients that are also available. And then through ActivityPub, you can say I'm doing this, I'm doing that, I'm on that server, here is my friend list, here is my profile list. But that's the beauty of it, is that through this federation, I'm part of a bigger network, but I still define my own rules on my own server. So I have those two different eMirrorz space and then I can see also the, only I can see the log of my different activities, some chat where I had this, who I met, etc. So now the interesting thing is there is this one page here on how to set it up and that it works with Mozilla hubs. So what I can do is, for example, go on my hubs experiment server. I'm already logged in, so it's not going to prompt me just on the previous example for network day frame. Yeah, it's going to check if I want to log in with this, accept and then it already finds my nickname and my avatar. And then what I can do is connect with a private browser window where I connected with another profile. It connects me as, let's say, an eMirrorz or randomly generated nickname, but if I use my eMirrorz login, it attempts to log in and I can see who I am. So I think I had a not utopia and then to prove, let's say, to show the demonstrate the federated login, I'll use it on this other server. And that's also, that's the key part that when you're new to federation might be a bit strange. It's like, oh, you have, I don't know, your Twitter login or you have your Gmail address. You need here to put the domain because maybe you're not using the most famous server. 
And again, that sounds a bit maybe strange in the first place when you do that. But the most popular is not actually the most interesting. And again, it depends on your own values. So I'm going to try to log in, hopefully, so it redirects to another server. Hopefully it has my yes, because I, it's generated. So I don't remember what it is for these examples. And then I can decide what I want to share, what I don't want to share, giving everything or giving nothing or giving everything. I suggest this one, I'll allow this. And now we change my nickname to not utopia. And then I can join, put an avatar. I don't care for the sound. And voila, we can see each other. I'll try to be here. So you can see on both windows. And here, voila, I have my two avatars seeing each other in this virtual space. So again, it's not convenient to show, but otherwise that works with your headset. Still optional. But in my opinion, that's the most interesting kind of experience you can get. And yes, we can use the chat to say hi. And I have it here and I'll have it also in my e-merc history. So I can close that and I left a mold on my own now to clarify a bit more what happened there. So what happens is, what did a hubs brings this 3d interface e-mers is going to add on top of it some customization. And that customization is going to be this login button is going to be the friend list are going to be whatever new behavior you want to set up. And thanks to activity pub is going to keep the updates and login. So you can do this with network day frame. Another way to build your 3d world with hubs or if you have another one, so frame VR is an alternative. You can also use that. So you already have a network day frame, modular hubs, frame VR, by Vibella, to at least have three different implementation of VR 3d that are compatible with e-mers. So to me, that's the exciting part. e-mers per se give a set of functionalities, hubs is nice, frame VR is nice for other things. It's closed source, but hubs is open source. But the point is they can still communicate to each other and I can communicate with somebody with another profile from another platform. So that's the core aspect to me. That's the core aspect that e-mers is bringing to 3d or VR and that's the core component of what I believe is a proper metaverse, meaning something federated, something you keep on having control on, something that if you want to join servers together, network together, you can. But according to rules and forms of behavior that you believe are just or fair or things like this. A quick word that this also means you need money or you should have a way to pay for your hard work or your services. And that's why also it's interesting that it's not required also, but e-mers propose directly web monetization. So for some of you who are not aware of it, web monetization is, for example, using Coil or other platforms going to give a little bit of money every second you're on the server. And then you can have access to a new feature, let's say other scenes, premium scenes or other avatars. And then you're going to find a way why it's interesting. It's because then you're not going to have to rely on advertising. Advertising per se is not necessarily bad, but if it destroys privacy, the consequences for our society, for the political system is pretty terrible. It's not just about yet another type of soda. It goes much deeper than this is much more riskier than this, I would say. 
So yeah, that's a set of components that I think make a lot of sense together. WebXR is how you bring 3D content to a headset, with six degrees of freedom, moving in space and with your controllers. glTF is for the 3D assets, how you can have an avatar, a backpack, whatever you want to put in and then show. And then Immers relies on ActivityPub in order to have a federated server. So let me summarize this in a minute or so. What we saw is that you can have 3D or 2D or even text as ways to converse on the metaverse. The number of dimensions, or whether it uses VR or not, of course makes for a different experience, but it's not the most important thing. We can have one example of an implementation, like a Hubs room, and you can have two rooms on the same server, let's say the mozilla.com server. But what that means is that we have one server, and we have the same person owning it, the same person paying for it. You can even have two servers, say hubs.mozilla.com and whatever-else.mozilla.com, that are two physical servers; it's still paid for by the same person. What people usually consider is: okay, that's technical. But what I'm trying to argue here is that it's different, it's about power. Maybe you like Mozilla, or maybe you don't; maybe you like some of the rules they set in place, maybe you don't. So what's interesting is if you have, for example, two different Hubs servers, one owned by Mozilla, another owned by you or me, and we don't have exactly the same rules, and yet we find a way to connect: I can link to your server, or you can link to mine. It means different owners and different rules. What's also interesting is that you can do that with different implementations or services: you can have Hubs, Networked-Aframe as I showed, also Frame VR, which is not open source, hence the little lock there. But what's interesting, again, is when we can go from one server to another, with an owner that is different and even an implementation that is different, and instead of just hopping there, we have some form of data portability. We have some form of bringing along, or relying on, the same profile, for example, because the profile can have a list of friends, a 3D avatar, whatever you can come up with. Basically, it means being able to go from one server to another, with different owners, with different rules, without starting from scratch or becoming, let's say, someone else. That's what federation is all about. And that federation works in 3D or in VR is actually crucial, or I would say more interesting, not so much for the technical aspect, even though that is interesting, but because, as some would say, to quote McLuhan, the medium is the message: what does it mean when we have a different medium? Here we walk around in VR, but it's basically a metaphor for social networks, all those different 3D worlds together. And for social networks, the message is society. What kind of society do we want to live in, work in, and so on? The structure, the technical structure underlying the metaverse, is going to shape our society. That's why I believe using open source and using a federated type of network is so important. It's not just because it's cool tech, it's because it's going to impact our society, the way we want to live together. So that's why I encourage you to check out ActivityPub and the Immers implementation.
That's why I encourage you to try Hubs, to see how you can have different federated servers, and maybe set up your own server. Maybe try your own implementation of ActivityPub with another server. As long as we can connect back together in a way where we don't even need or want a central server, I think that's going to shape a society that is much more resilient, much more interesting, and, for us as individuals, much better than a central actor in the middle that is driven by advertising. So thanks a lot for your time, and just let me know if you want to meet on the metaverse, if you want to build yours, or if you have any questions. Happy to try to help if I can. Take care.
We keep on hearing about the metaverse but what is it and more importantly, can JS be used to build it? We'll briefly clarify what the metaverse is and give practical examples today with federated virtual reality servers managed by different persons. Behind the buzzword from Facebook/Meta there is a truly interesting concept : connecting virtual worlds! To do so there is no need to be one of the largest advertising company. In fact there are several solutions working today allowing to navigate from 3D or VR web pages and even keep information across, like a profile. During this presentation we will explore the WebXR specifications, in particular links between pages, and test an implementation running today named ImmersSpace based on Mozilla Hubs and ActivityPub.
10.5446/56834 (DOI)
you Hey, I'm Lewis and I like to write code and make experiments with audio and user interfaces. I also go by the name Drone on GitHub. Today I'm going to present some users of web serial API as a prototyping tool for interface development as well as demonstrate some code. All the code is available at the link provided to the GitHub repository. I'm going to assume you have a little bit of familiarity with programming, but I'll try to explain as much as possible. For those unfamiliar with these three words, web, serial and API, I'm going to do a quick overview to catch you up. When I say web, I'm specifically referring to the web browser. So that's Chrome, Firefox, Safari, Edge, but also the underlying system for something like Chrome and Edge, they use the same system called Chromium. Web in web serial API is referring to something in a web browser. For this demo, to follow along, you will need Chrome for desktop. There are instructions to download this from the official website. When I refer to a serial connection, I'm talking about the connection between a device and my computer where both are going to be sending data and not just providing power. When I refer to serial data, I'm talking about the data being sent and received using this serial connection. For example, USB is a standard that defines a bunch of things like the cables, ports and the ways to communicate using serial. So devices with USB can work together. MIDI, Bluetooth, HID like mouse and keyboards and similar are all different kinds of serial systems. For a deeper discussion, I've provided a link to the Wikipedia entry on serial communication. For the web serial API, there's an operating system level interface for passing serial messages and a peripheral device just needs to be connected to this interface and the web browser can get access to the device. API or application programming interface is a general term for a system of allowing a piece of software to communicate with another piece of software. When I write JavaScript, my code needs a way to talk to the web browser and the web browser has a bunch of capabilities that it wants to give my JavaScript code. The browser will, following what's called web standards, implement a JavaScript API for various features that I can call just by running the code within a browser. And by call, I mean it could be a function or a class constructor or a variable. All of these are documented on the MDN documentation portal. Web Serial API is a collection of functions that I can call to prompt the browser to ask for access, then subsequently give my code access to and ability to communicate with the serial devices attached to my computer. The web serial API standard is still a draft and was only recently released in Chromium desktop, so it's only available on Chrome and Edge for desktop. Unfortunately, there's some resistance by Firefox and Safari, so it's highly likely that Chromium will remain the only implementer of web serial. And it's also uncertain whether it will reach Android phones. iOS will also be unlikely to have support, as all iOS-based browsers depend on Safari. For more information about the Chromium project implementing all of these interesting APIs, check out ProjectFugu. As most of you are likely aware, web apps generally have their interface rendered on a screen, buttons, toggles, text inputs, etc, and people primarily interact with these via a real or virtual keyboard and a touchscreen or mouse. 
These physical peripherals use serial, and as such, we can use the same underlying technology to build a custom physical peripheral to provide a new way of interacting with our web app. This underlying technology is the microcontroller. When it comes to programming, microcontrollers are quite different from building web apps. You usually need special tools, both hardware and software, as well as knowledge of how to do very low-level coding, how to make the code run efficiently on such a small, low-powered device, and, if you have a circuit attached, how not to wire the circuit so that it breaks the microcontroller. Breaking a whole computer is not something people expect when writing a web app, so the idea of breaking hardware when building something might seem very intimidating to new developers. Fortunately, today we have an abundance of solutions to help the whole design and prototyping experience feel much more friendly. The first solution is lower cost. At the moment, there's a gradual race to the bottom in providing a low-cost development platform for microcontrollers, part of which is documented in a couple of blog posts about $1 microcontrollers, and there's even now a 3-cent microcontroller gaining a following. As mentioned, microcontrollers aren't really programmable on their own. They need to be part of some kind of platform so they can connect to other electronic components and receive code updates from the development environment. A few years ago now, because of how complicated this was for novice developers, a handful of Italian engineers released Arduino, a hardware and software development platform for the cheap Atmel chips. This became very popular among their target market of creative people and younger programmers, and has definitely made an impact on the entire microcontroller ecosystem. Last year, the Raspberry Pi Foundation released their first microcontroller, the Pico, which is a fraction of the cost of an Arduino and uses the Arm Cortex-M, which is also cheap but very powerful. This is the device I will be using for this demonstration. This brings me to the second solution, the programming tools. Arduino's programming language was derived from work initially created by the Wiring development team, which is closely related to the Processing programming environment. But over time, interest in alternative languages and platforms developed, and much more popular programming languages like Python and JavaScript were introduced on microcontrollers. What was required to do so was to make special versions and runtimes of these languages so they could run efficiently on microcontrollers. With chips like the Cortex-M or ESP32, there was now much more power to work with, so having a high-level language became more plausible. MicroPython and Espruino were some of the earlier iterations of this, with their own boards and development tools as well. But forks and variations of these tools were created, and new runtimes and development tools are being created regularly for all kinds of new chips. Kaluma is a very new JavaScript runtime that runs on the RP2040 chip that is on the Raspberry Pi Pico, and the development environment for it runs entirely in the web browser by utilizing the Web Serial API. This is the runtime and tool I will be using for this demonstration. The third solution is an emulator. Most people might know emulators as things that run PlayStation or Super Nintendo or Game Boy games on your laptop.
The idea is that computers today are so powerful that the entire hardware system for these old platforms can be replicated in software, and the games can run on this software just as they ran on the hardware. Now web browsers are so powerful that they can provide the resources to emulate these chips. Today there's a free web app called Wokwi that gives us an emulation of the Raspberry Pi Pico, along with all kinds of additional electronic components, and it's programmable with any runtime. So instead of needing to buy and have a Pico to begin prototyping, we can just open a web browser and begin building our device, with nothing to download and install and no account required. No matter what you do with the thing, there's no risk of breaking any hardware if you wire something incorrectly. So this is where the demonstration begins. With this Wokwi project open, I'll put a potentiometer on pin 26: click the plus button and select the part, select the ground terminal on the left and then click one of the highlighted pins, do the same with the VCC terminal on the right, and finally the middle terminal goes to pin 26. Since the potentiometer is attached to this ADC input at pin 26, this is the first thing we can implement to see what kinds of values we get. Get the ADC API, read the pin, and create an interval loop of 100 milliseconds to read and print the result. Instead of logging to the console, what we actually want to do is send a byte array, that is, our serial stream of binary data. Here we can have a reusable array, where the first byte is a start byte, which lets the app we're talking to know that this marks the beginning of our message, and the subsequent bytes contain our message data, which will be 4 bytes, but I'll get to that in a moment. We can see that this runtime is outputting a standard JavaScript float, which we can then copy into a Float32Array. Float32 means each item in the array is a float of 32 bits; each byte has 8 bits, so that's 4 bytes, which is the length of the message. But the send function is using this Uint8Array type, so the reason for first putting our ADC value in a Float32Array is that typed arrays are views over buffers that are interchangeable between different types. Therefore, although we have this particular float type, we can use the same buffer for a Uint8Array. The 4 bytes per value in the Float32Array can be accessed individually from the Uint8Array. Since the message array and this array sharing the buffer are the same type, the data can be copied directly into the Uint8Array used for the message, offset by 1 for the start byte. That's all that's necessary. And as we can see now in the Wokwi console, the output is gibberish, because it's just trying to convert raw binary data into text characters. With the code now ready, the Kaluma platform includes an IDE we can use to send the code directly to our connected device. The first step does involve downloading a binary, which includes the Kaluma JavaScript runtime, but this is a standard step to get any runtime onto the Pico, and it's relatively easy compared to what is required to set up other microcontrollers. The steps are: first press the button on the Pico, then connect a USB cable from our computer to the Pico. A USB storage device will appear on the computer, so download the firmware file and copy it straight to the device. The Pico will reboot a moment later on its own. Now, in the Kaluma IDE, connect the device by clicking this button and selecting the Pico serial device.
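To recap the device-side logic just described, here is a minimal, hedged sketch in plain JavaScript rather than the exact code from the demo. The typed-array packing is standard JavaScript; readAdc and sendBytes are hypothetical placeholders for the Kaluma-specific ADC read and serial write, so check the Kaluma documentation for the real module names.

// Pack each ADC reading as: 1 start byte (192) followed by the 4 bytes of a 32-bit float.
const START_BYTE = 192;
const message = new Uint8Array(5);                    // start byte + 4 payload bytes
message[0] = START_BYTE;

const floatView = new Float32Array(1);                // holds the reading as a 32-bit float
const byteView = new Uint8Array(floatView.buffer);    // the same 4 bytes, viewed byte by byte

setInterval(() => {
  floatView[0] = readAdc();                           // hypothetical: read pin 26 (value range depends on the ADC API)
  message.set(byteView, 1);                           // copy the 4 float bytes in after the start byte
  sendBytes(message);                                 // hypothetical: write the raw bytes out over serial
}, 100);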
Next, copy this code from Wokwi over to the IDE, click upload, and the same kind of gibberish is visible in the console. Click disconnect, and our Pico is ready to use. Now, using the circuit you designed in Wokwi, set up your components with your Pico device. With the value of the potentiometer now available via the USB serial connection, I'm going to step over to another browser-based app to begin making a small app. For doing so, I'm going to use Glitch, which, again, without downloading and installing anything or opening an account, allows me to build an entire web app in the browser and save it for later. Unlike other browser-based coding tools, it also doesn't run the code in a frame, which means I won't be restricted by the permissions required to access the serial device. The first step is to add a button to tell the browser I want permission to access the serial connection. Add a listener that will trigger requesting a serial port. When I get a port, I'll pass it over to an async function to do all the reading and handling of the data. As written in the Pico's code, I know the serial port continually emits messages, and these messages appear in the app as a byte stream, which here is a Uint8Array. The messages may arrive only partially, or grouped together, so this stream will be piped through a transform stream. The transform stream will look for the start byte, which we decided is 192, and then, once we have the additional 4 bytes, these will be emitted, as this is the message data we are expecting. So every time we receive a message, we will send it to our handler function. From the transformation step, there are 4 bytes per message. As we know from the Pico code, we can use the buffer for these bytes interchangeably between a Uint8Array and a Float32Array. Therefore we can create a Float32Array over this buffer in the handler function and extract the first value as the number we need to find the rotational angle for the web app. We now have a web app controlled via a custom peripheral device. I want to give a special shout-out to CircuitJS as a circuit simulator that can be used to help design the circuits that connect to the microcontroller. It's quite powerful and provides an interface that makes it very easy to build and share circuit designs, again without downloading and installing anything, and no account is required. If there's something that Wokwi can't do, I'm fairly certain CircuitJS can let you fill the gaps, and it's much closer to the standard form of circuit simulators and circuit schematics, so it can be a nice stepping stone to the world of electronics engineering. I got into this rapid prototyping adventure because I enjoy experimenting with interfaces for exploratory musical apps. The first experiment I built was just to test out the Web Serial API, using two potentiometers to control a sawtooth wave oscillator and a square wave oscillator. My second experiment was a challenge, a process I used based on inquiry-based learning, to see if I could build something interesting using a potentiometer and a button within a week, which, as it turns out, is the underlying interface of the original Atari paddle controller, and during that week I built a web app inspired by an old Nintendo DS music toy called Electroplankton. I think there's a lot of potential in using these tools.
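For reference, here is a hedged sketch of the browser side just described, not the exact code from the demo. navigator.serial, TransformStream and the typed arrays are the real APIs; the baud rate, the element id and handleReading are illustrative assumptions.

// Request the port, then reassemble messages: start byte 192 followed by a 4-byte float.
const START_BYTE = 192;

document.querySelector('#connect').addEventListener('click', async () => {
  const port = await navigator.serial.requestPort();  // prompts the user to pick the Pico
  await port.open({ baudRate: 115200 });              // baud rate chosen here as an assumption
  readMessages(port);
});

function frameMessages() {
  let pending = [];                                   // bytes collected for the current message
  return new TransformStream({
    transform(chunk, controller) {
      for (const byte of chunk) {
        if (pending.length === 0 && byte !== START_BYTE) continue; // wait for a start byte
        pending.push(byte);
        if (pending.length === 5) {                   // start byte + 4 payload bytes
          controller.enqueue(new Uint8Array(pending.slice(1)));
          pending = [];
        }
      }
    },
  });
}

async function readMessages(port) {
  const reader = port.readable.pipeThrough(frameMessages()).getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    const [reading] = new Float32Array(value.buffer);  // reinterpret the 4 bytes as a float
    handleReading(reading);                            // hypothetical: update the web app, e.g. a rotation angle
  }
}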
As for the areas that immediately come to mind: accessible interfaces, where custom designs and low manufacturing quantities are required so mass production is unlikely; unique forms of browser-based gaming, or using browsers for game prototyping with unique controllers, such as simulators; and telepresent art, where people have physical art pieces in their home. If you want to keep in touch and stay involved, there are a few places on social media, such as the Web Serial API subreddit, and on GitHub there's the Web Serial awesome list. Thank you for listening. Hi everybody. Thanks a lot for your presentation. It was pretty interesting. I didn't know this domain; I didn't believe it was possible to do that. We have some questions if you want to answer. I can only hear you a little bit. I'll answer questions if there are any, but I don't see any questions at the moment. We have about 15 minutes, so let's see if anybody has anything to say. If nobody has any questions, I can start singing, if anybody wants to listen. I hope you all enjoyed it. It was pretty fast; I hope people can re-watch it later and pause it. Have you considered use in educational settings? It looks very fitting for that purpose. I haven't really considered using it myself in education, other than experimenting with it. I think with something like Wokwi, talking to the developer of that, and there's a really good interview with him on the Embedded FM podcast, he talks very much about its use as an educational tool. I know that a lot of teachers have used it. Kaluma as well, I'm pretty sure, was designed with education in mind. A lot of projects that have come up around the Raspberry Pi Pico have been primarily for educational purposes. Me personally, I'm just exploring and trying to see how seamless the whole process can get. I think it's way more accessible, both from a younger learner's or novice's point of view, while also saving costs and reducing the risk of destroying hardware, which is a big risk when dealing with circuitry. I would happily be involved with educational stuff, but I haven't made any goals to do that. For me personally, I just work on these projects to come up with blog posts and other things to help people implement the same stuff themselves. I guess there is an educational element to it, but it's about experimentation and finding interesting uses. It does incite experimentation, which would be perfect to keep students motivated. I agree. I mentioned inquiry-based learning within the video. I didn't really go into that, but a lot of it comes from Seymour Papert's Mindstorms concept. I read that book late last year. It's a pretty good book if you're into the early years of using computers as an educational tool. That's where the whole turtle Logo stuff comes from; it's where Lego Mindstorms got its name from as well. It's about this idea of giving people, especially kids, computers and tools to just experiment with ideas on their own and come to their own conclusions about how mathematical functions and formulas work. I think you see it in a lot of these similar tools, but really it's mostly about access: being able to just actually start and get into something with very minimal effort, and not have to go through all the effort of setting up a development environment, which is what you get out of the box using Glitch. You don't need to download anything for that.
I think it's both of those: looking into education, and also looking into how these new technologies and browser technologies fit into that. They're both, although separate, really interesting areas to look at. Someone asks: do you happen to know if one can use this in Node-RED? Having a visual programming language at the ready here would give this much leverage, I think. I'm aware of Node-RED. I haven't really looked into whether this is integrated into it, but there was a platform I saw recently. What's it called? Actually, while I'm here, I'll plug it again; I just posted it into this forum, along with all the other links. Sorry, I just lost my place. I was referring to this development environment called Playpiper.com. I don't know how much development activity it has, but it uses a visual programming language like Blockly or Scratch, and it's similar in that you get a runtime for your Pico that you download onto the device. It's actually pretty cool. They even use another web API, the File System Access API, which I spoke to the maker of Kaluma about; he hadn't implemented it, but maybe he will. It's an API that allows you to access your file system directly, so you can open up your USB drive directly. When you download your UF2 file, which is your runtime, you can write it directly onto your Pico device rather than having to download it as a file, open up your file manager and drag it over and all that. They've implemented it on this website. If you have a class of students, all they need to do is plug in their Pico, and you can just do everything from the browser, basically. I think that project is specifically aimed at educational settings with kids. Anyone else have any more questions? Do you hear me now? I'm on clean-up. Yeah, I'm trying to think if there's anything else to cover, but I do encourage people to try it out. I've seen some really interesting projects that I've been posting every now and then in the subreddit. People are taking existing serial devices; for example, I saw one recently with USB-based scales connected to the browser, which I'm sure makes it a lot easier to run, say, a room that does a lot of product shipping. I think this idea of being able to write applications very easily in the browser opens up a lot of commercial opportunities as well; you don't have to write native stuff, and you get a lot of portable software. I'm not sure if this is a question: with PWAs that can even work offline, I see applications for other fields; maintenance applications not needing to install special drivers definitely simplifies things for me. Yes, so that actually segues from what I was talking about. So yes, PWAs will work fine. And yes, I think that if you're an enterprising person, and I know this is about FOSS, you could probably see in this the opportunity to rapidly build all kinds of applications for different businesses, maybe local businesses, engage with the community, and integrate with the kinds of tools they need to do their job better, which I think is an exciting prospect. And that's where a lot of progress in the browser has come from: continually opening up more commercial opportunities.
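As a side note on the File System Access API mentioned in that answer, a hedged sketch of flashing a UF2 from the browser could look like the following. showDirectoryPicker, getFileHandle and createWritable are the real API calls; the firmware path and the overall flow are illustrative assumptions, not the actual code of Piper or Kaluma.

// Let the user pick the Pico's mounted drive, then write the UF2 firmware onto it.
async function flashFirmware() {
  const firmware = await fetch('firmware.uf2').then(r => r.blob());  // placeholder path, not a real download location
  const drive = await window.showDirectoryPicker();                  // user selects the Pico's USB drive
  const file = await drive.getFileHandle('firmware.uf2', { create: true });
  const writable = await file.createWritable();
  await writable.write(firmware);
  await writable.close();                                            // the Pico reboots once the copy completes
}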
And but in saying that, you know, keep your code open and share your ideas and stuff. Because there's just, yeah, there's a lot of really interesting implementations out there at the moment of all these things. There's another one, old radio scanner implemented. Another question, any idea of browsers will pick up more hardware communication APIs in the future? I doubt it. So Firefox and Safari, as I said in the talk, they're kind of a bit like hesitant to do that. It's understandable, like you have like your kind of your, you know, this browser environment, you know, running connected to the internet, you can, you know, your modems talk serial, you know, like what if somebody has a modem and then you can connect directly to that modem and people don't always know what they're getting permission. So even though it is permission blocked, like there's arguments for and against it. The Firefox and Safari are kind of against it. And they're pretty staunchly against it. I don't know. But on the other hand, Chrome is the dominant browser. So yeah, Project Fugu is where where Chromium is implementing all of these, all of these different APIs. So there's like Web USB, Web Serial, Web Bluetooth, MIDI, what else is there? MIDI, there's also the Sensor 1, HID, which is kind of similar to USB, except you get access to like normal devices, Web USB, I think only works if your, if your native system doesn't automatically claim the USB device. And then there's GamePad, which exists as well. So like there are a lot of APIs, but it's just, I think like if you're really interested in excited about it, it's a matter of using it and building stuff and experimenting with it and then showing the people that that actually develop this stuff. So people at Mozilla or Apple or whatever, that people are interested in it, that that there is a use for it, that people are excited about it, and that it's worthwhile trying to figure out a way to get it implemented everywhere. So it doesn't just become yet another standard that doesn't get maintained and dropped, which again, you'd also, that it's also a big part of encouraging Chrome and the Chromium developers to keep engaged and keep it maintained and keep pushing for it. But yeah, anyway, thanks for listening. And I really appreciate all the questions. And if you have any more questions, just reach out in the room. And I'm looking forward to seeing all the rest of your talks. And yeah, it's really great to be part of this community. You're really great people. Thanks. Thank you a lot. It was great to hear you. And we'll have a new talk right now. Bye.
The magic of computers and smartphones is the fact that they provide a very malleable interface - the screen. Without having to manufacture extra parts for every new application, this is a massive time and cost saving for engineers. However, this comes at a cost of accessibility and usability, as well as diminishing the physical connection one might have to the device they are using. Physical interfaces are now much easier to develop, and can even be constructed by the end user, since MCUs are now very cheap and readily available, along with lots of modular parts to construct interfaces with. Web Serial provides the added benefit of being able to use the highly distributed and easy to code with JavaScript / Web platform with these MCUs. This means rapid prototyping can be performed, along with user testing, very easily, making it much cheaper and faster to reach an end product. This talk intends to demonstrate some basic examples along with some steps to getting a process together yourself. - Introduction to Web Serial API, - An overview of the MCU market today, - Demonstrations of some homemade physical interfaces for web applications, - How to set up a basic web app using web serial, - Overview of reading values from a physical input component through a circuit and converting it to serial data, - Using browser-based circuit design tools for safe pre-breadboard experimentation using Wokwi and CircuitJS, - Building a simple physical input, through to reading the data into the web application, - Ideas for potential tangible user interfaces, summary, questions.
10.5446/56837 (DOI)
Hello, good afternoon. I am Vittorio Bertola and I'm here to give you an update around the digital market, the new European Union law that is going to address competition on digital markets. So I already gave a talk on this last year in this dev room, so I hope you remember it. I will still be recapping a little for the benefit of those that didn't see the talk and don't know what we are talking about. If you really don't know what we are talking about, I encourage you to also watch my main room talk tomorrow and Sunday at noon. We should be a much broader discussion of all digital sovereignty aspects and then we'll end up with an also a recap around the digital market sector. So I am an engineer, I am a digital rights activist in the 90s and I'm currently working for OpenExchange, which is a German open source software company. And so where are we? Well, we're still checked into the hotel in California. So I mean, we are still as Europeans bound to use all this nice service provided mostly for free, not always by the big companies that you really can do without. So it's even if you wanted to do without them, it's basically impossible for a number of reasons. And that's, I mean, partly it's the result and it's also the cause of the sheer economic size of these companies. So we saw last year that I mean, did these companies, well, the five big companies were the five biggest public listed companies in the world and they had reached almost over $2 trillion in value. And now one year later, we're nearing the trillion dollar mark. So $3 trillion is like France's GDP, the entire wealth produced by France in one year. And so it's growing at an incredible pace. And so the problem is becoming bigger and bigger. And we're getting new companies now. I mean, Facebook changed name to Meta, Tesla Blockade and actually overcame Meta in value. So the problem is still there. It's not good, but and this is and we're still stuck with these silent services. I mean, I usually use instant messaging as a service because that's the easiest example. Services built as well. Gardens in which you're forced to stick inside into I mean, if you install an instant messaging application, you can only exchange messages with the users of that application differently from email. And so I mean, you have really have to install all the different instant messaging. So if you want to talk with everybody, and if you have a new instant messaging app, it's basically useless because no users are in it. And it's very hard to convince people to move in yet another instant messaging service. And so this prevents competition and prevents you from running your own services. And yeah, and we're still closed in. And another topic that has gained relevance during the year is bundling. So bundling was also quite closely debated. And there was a last mint amendment to the DMA going at the very end of the plenary discussion in the parliament. It was the only one approved. So I mean, gives you the idea. So bundling is basically the practice to which the dominant companies in one of the platform services, especially those that, for example, dominate the operating systems markets, especially mobile, but not the only ones. I mean, they force you or push you to use also their services for other services. I mean, these disadvantage in competitors for those other services. And this is done by bundling them together by pre installing them. 
So you get your own by phone, you already have the search engine from the same company installed by using the defaults on the devices by integrating them so that the platforms own apps have a better performance because they are more integrated with the APIs and address and so on. And so we especially in the mobile market, we really are at the point in which we have a lot of concentration. And so I before getting to the actual DMA, I wanted to mention a couple of technical issues that have been the subject of meetings, let's say, during this year in around this legislative process. And one is upstores, which are I mean, I think this is also an issue, especially with Apple's upstores in the US. It is an issue mostly because of their 30% tax. So the fact that basically they say that if you want to, if you as a developer want to distribute an app for their own for iPhones, then you have to comply with their own rules. So they set the rules and you can only I mean, the apps can only be installed if they are within Apple's upstores. I mean, basically, there's no real way of doing other things there. So the app store is sort of a monopoly. And you have to accept conditions, including the fact that if you have in a purchase is you have to give 30% as a commission to Apple, which is exorbitant. And so this is a clear example of bundling, knowing which many functions are bundled together, like finding apps, indexing apps, showing them to you, helping you to install them, so downloading them, installing them, and then managing subscriptions and then payments. And all these these functionalities are bundled together, and you have no way to separate them. And so to pick a different provider for just one of them, actually, you cannot pick a different provider for any of them. They are all managed by Apple and there's no alternative. And this is really not a technical need. I mean, we never had upstores in computers, we have package managers, but they don't have all these restrictions, they don't force you to go through them for me to pay for whatever you want to do with applications. So there was really a discussion on what is the Apple tax for. So is it for the payment system? It looks too much. It's the cost of examining the code and ensuring that it doesn't harm kids, which is what Apple says that it's one of the reasons. Nobody ever asked for this. These are royalty, but computers never work this way. So we never paid royalties just for the pleasure. So you see some of the closest in the DMA are directly targeting this problem. The other problem I wanted to mention is encryption, which is a much broader discussion, but it's not about the philosophical, the power side of the DMA discussion. So first of all, I want to be clear, encryption is a good thing. You should encrypt your communications. Definitely I'm not in favor of state and back doors in encryption. So I'm not arguing on this. I think it's good that we have been progressively encrypting all protocols in the last 10 years, but there are different perspectives on this. So the problem now is that the discussion encryption is not really just about privacy and freedom, which is how it is commonly framed, just like, okay, let's encrypt stuff so we will get more privacy. It's, we will bypass censorship. It's partly that, but there are other things in it. And partly the real point today of the discussion around encryption is the discussion around control. 
So we are building more and more services, devices in our homes, but increasingly also applications on our computers and smartphones that just bring up an encrypted channel to cloud service by the maker. They send data. You have no way of controlling what they send about you. You have no way of stopping them or preventing them because they don't work. If you, if you find, if you even find a way to detect the connection and stop it, then the device would stop working. So this is really a loss of control for users, even before then it is for governments and ISPs. And this is actually functional to the preservation and in all these dominant positions and in general the position of power of service makers. So I mean, I have examples related to DNS because that's what I do, but it's the same with other encrypted protocols. We've seen the, the, the move to encrypted DNS in a way that at least by some browsers was managed so that they, they would just ignore your DNS settings and send their DNS queries to their own or a friendly DNS resolver in another country, which was out of reach by the government and by the ISP. And so they would make control impossible by the ISP and by the government and by you, by the way. But so this was framed as a discussion around DNS filters. And so the first thing to say is that Europe likes DNS filters differently from the US, which is a strong first amendment. So there are, I mean, it really depends on the country. Some countries don't filter anything, but there are many countries that really filter thousands of websites. And this is because of their cultural values. And, but there are things that should be filtered. I mean, first of all, there's a growing usage of security filters like blocking malware and phishing and botnets, which is very important for non-technical users like my elderly mom. I mean, she's not able to, it's not easy for non-technical people to avoid phishing websites. And so if someone blocks for them, I mean, it's much better. There's parental controls, there's blocking child sexual abuse, there's blocking gambling websites, counterfeit shops. Each country has different policies and wants to block different stuff. But I mean, independently from what you think about DNS filters, the problem is that the local DNS or the local locking system is also the only control point that the government has to prevent a foreign service operator from reaching their citizens if they don't comply with the laws. So this doesn't really completely apply maybe to big tech companies that have subsidiaries. So they have, I mean, a hard presence in European countries. But for all the others, being able to avoid the risk that the council just shuts you out of their market is really a game changer. And from the government's point of view, they are afraid of this for this very reason. So if everything gets encrypted, DNS and also the direct communications, blocking the internet platforms, blocking internet services from the broad becomes impossible. And it's even worse because this is also applying to you as a user. So I mean, I don't know if anyone else has a piehole, which is a small DNS resolver you install, you just have a Raspberry Pi and run it in your home. And it's something that, I mean, if all your devices use it as a DNS resolver, it will just filter out your ads. It will block the connections to tracking and tracker websites and filtering and surveying advertising servers. And so it's a user, I like this. 
But then, if this new paradigm takes hold, this kind of user-chosen local filtering also becomes impossible. So the message I wanted to send is that there's a growing understanding that the discussion is around control, and more is happening; it's also around centralization, the very phenomenon that allowed these companies to grow. In the last year we've been seeing the so-called oblivious connection model. This is sort of a scaled-down version of Tor with only two hops. Basically, all your traffic gets encrypted with a key for a second proxy, but first it gets sent to a first proxy. So the first proxy gets to know your IP address, and they also get your traffic, but it's encrypted in a way that they cannot access, so they don't know what the traffic is. The second proxy gets your traffic, decrypts it and sends it on, so they get to see your traffic, but they don't get to see your IP address. This decouples the information and actually makes it harder for people to track you. And the final destination only sees a flow of aggregated traffic from the second proxy, so it gets even less information about you. This is what Apple has been implementing in the so-called iCloud Private Relay, which for the moment is a paid additional service. They run the first proxy for everyone, and the second proxy is provided by some of the big dominant CDNs, like Cloudflare, Fastly and Akamai, under contract with Apple. So what's the point here? The point is that this can be a very good thing, especially if you're concerned about your ISP having a look at your traffic, or if you want to bypass governmental blocks; it really provides you more privacy and reduces what the websites see about you. The other side of the coin is that you don't really get a choice over the proxy operators, and an even bigger one, in my view, is that now all your internet traffic goes through Apple. And in the long term, the question is really: what guarantees that Apple and the suppliers that provide the second proxy will never start to cross-match your data? Because if they started to cross-match your metadata, at least, they would basically see all your internet traffic, and they would be able to track you much better than you are currently being tracked today. So there is some concern in Europe, by the industry but also by the Commission and the policy environment, that this might not be a desirable model for the long term, or at least that there should be some guarantees and some discussion around it. So the message is that encryption is really about control, and we'll see, I guess, more legislative processes around encryption, more rules and more discussion around this. So now let's get back to the actual legislative processes. So what's been happening? Well, the legislative processes have been going on and proceeding, but in the meantime all the countries are starting to think: hey, maybe we can make some money. That's really the only weapon they have against the big companies. Until the new rules are in place, they can only apply the old, rather blunt rules around competition and privacy; GDPR especially offers quite some opportunities. And so there have been lots of fines for these big companies. This is just in the last 12 months.
We've seen France fining Google basically 150 million euros and Facebook 60 million for their misleading cookie banners, the fact that the cookie pop-ups would basically push people to say yes and just proceed. Similarly, Italy has been fining Google and Apple for lack of correct user information on privacy treatments, data processing and so on. Italy has also fined Google over Android Auto, for well over 100 million euros; allegedly Google was using Android Auto to shut out competitors and prevent some of their apps from being installed there. And there was a lot of talk about this very large fine, 1.2 billion euros, that Italy gave to Amazon for anti-competitive practices in the logistics and delivery chain and so on. So these figures are starting to get higher and higher. Still, they are negligible, so the feeling is that this is not really effective. The companies will challenge them with all the lawyers they have, and in the end, if they have to pay, they will pay, but this is not disrupting their positions. The interesting thing is that now even San Marino can fine Facebook four million, and four million might look like little, but for a country of 34,000 people it's actually about 120 euros per citizen. So, for some of the smaller countries, this might actually become a sort of governmental business model, a way to get some relevant amount of money. In the meantime, as I was saying, the legislative process has been proceeding. These are all the things that are still under discussion: the Digital Services Act, which is the part around content moderation, while the Digital Markets Act is about competition; the Data Governance Act, the rules for open access to public data; and there's a new entry this year, the Chips Act, because Europe realized during the crisis of the last 24 months that if the factories in China and Taiwan stop sending you chips, then all your industry has to stop because it cannot get semiconductors and chips and boards, so there will be funds and policies to promote the birth of new European factories to make semiconductors. There will be the corporate tax directive, implementing the recent agreement on minimum corporate tax, which should be at least 15 percent; that was the agreement. There will be the revision of the eIDAS regulation, so we may get a new generation of open public identities. And there's Gaia-X, which is an interesting process, originally a German one, then European. It went through a sort of soul-searching, and now it's basically a consortium that is working on common cloud standards for portability: provisions and standards that would allow you to easily move your applications and services from one cloud infrastructure provider to another, and similarly also data ontologies for interaction. So if you want to have multiple companies in the same niche working together and exchanging data to build common integrated services, you need common ontologies, and the project is basically trying to work them out. At the moment it has produced a bit of software and a lot of paper, but I think it's getting better, so we should keep an eye on it.
So, as we discussed last year, one of the remedies, not the only one, for this situation of walled gardens is interoperability. Let's get back to the original internet principles: let's make sure that all these applications are built as modules and each module is interchangeable, so that if you are dissatisfied with how a specific function has been implemented by one app or service provider, you can just switch to another service provider and everything else will continue working. This also requires care, especially on mobile phones, so that the user actually has the user-interface opportunities to switch easily, pick a different application and so on. So we're trying to get to a world of interoperable apps, in which I will only have one instant messaging application and I will be able to use it to send messages to all the other IM users on all other IM applications. So I can actually pick the best one, even a new one, and move to it and still keep my contacts. At the same time, this will help competition, since there will be a chance for people to try, and actually use, new applications even if there are no users in them yet. So what's the state of the Digital Markets Act? You will remember that this started as a proposal by the Commission in December 2020, and the entire year was spent discussing the proposal, with a public consultation, and then the Parliament took over; they started discussing it in several committees, and in the end it was approved by the Parliament in first reading on the 15th of December. But the Parliament made 229 amendments, so there were significant changes, mostly in the direction of expanding the scope and strengthening the provisions, so the Parliament is really in favour of this act. Now, according to the legislative process, what's happening is the so-called trilogue phase, in which this Parliament-approved version has to be discussed with the Commission, which of course still has its own initial proposal, and with the Council, so the Member States, which may have different ideas on what should go into the text. Currently the EU is under the French presidency; they really care about this, they want to deliver it and approve it before the presidency ends in June, for political reasons, to show that they managed to conclude it. So there's a good chance that the trilogue will end in March or April, pretty soon, and then the Parliament votes again, and the final approval could happen in the first half of 2022. So in the end we could get the law at least approved, and then of course it will take one, two, maybe three years for implementation and entry into force, but at least we will have approved the final text. So the law was aimed at business users, but one of the things that we as an open source community, both the digital rights NGOs and the open source companies in Europe, argued for, and we mostly made it, is that it should not really be just about business users; it should also consider the end users, and if you introduce rights, at least those that make sense should apply to any user, not just to business users of internet platforms. And that's more or less what happened. You see, the things in italics are the changes that the Parliament brought with respect to the original Commission proposal.
So it's aimed at very few companies, the global gatekeepers, there's been a lot of discussion on the criteria for a gatekeeper that the only change at the moment that the Parliament made was that the threshold of turnover, annual turnover, to be qualified as a gatekeeper, has been moved to 8 billion euros, not 6.5, but I didn't do the math, possibly there are some companies in the middle that were pushing for this, I don't know. In general, this is meant to be a new antitrust instrument for non-traditional dominant positions, so stuff the economists in systems saying, but this is not a dominant position, but still everybody else realizes it's a problem for our competition. So this is the current list of covered services. So the first part is from the Commission's proposals, it includes marketplaces, Amazon, Booking.com, it includes search engines, social media, video sharing websites, instant messaging, operating systems including mobile ones, cloud computing services, very vaguely defined, any advertising service provided by the people in the above services, and then the Parliament added three specific more services, web browsers, voice assistants, and smart televisions. So this is the exhaustive list of what will be covered, meaning that any other type of internal product or service which is not in the list will not be covered or affected by these provisions even if they had a gatekeeper in it. So there are two types of provisions, the article five provisions are the immediately executive ones, you must not do this, and the list is more or less the same. There were some changes, some things were moved, and we will see it, some things were moved from article six to article five, but this is really more or less the same list that I showed last year. But now we have some specific anti-bundling clauses. The first one was already there, it's been refined, it's been moved as I said from article six to article five to become immediately executable by the Parliament, and it's about the fact that, I mean, business users should be free to use only one of the services without being forced to use any of the others, both in terms of auxiliary services like delivery, for example, you use Amazon's marketplace, you must not be forced to also use Amazon's delivery service, and also others, so you use Google search engine, but you must not be required to access, to be able to use Google search, to also use Google's video sharing service, for example. And then there's a new one, this was really pushed by the Parliamentarians, it was a Parliament amendment, and it's about the installation. So basically now smartphones and devices will be required to, I mean, at the first installation of the device, we will first bring it up, you will be asked which service you want, you will be given a list, and you will choose, for example, which search engine you want to use. And also the operating system must not prevent the uninstallation of their own apps, so if you have Android, you must still be allowed to uninstall Google's search engine and maps and all the other apps, and so free up space and use something else. 
Then there's the interoperability clauses, and this was really the subject of a big fight for the community, and you remember that last year the only thing we had, apart from this real-time data probability clause, which is nice, but not really very important, and it was this clause for interoperability for an auxiliary services, so the only thing that gatekeepers would be required to open up would be the auxiliary stuff like payments, identification and logins, delivery, advertising. Now this was expanded and there's an explicit mention of access to operating system features, so that the operating system cannot advance their own apps in regard to other competing apps for access to APIs, slides and whatever, which is good. But then the success we had is that we actually managed to convince the parliament to add two clauses for interoperability of specific core platform services, so instant messaging and social media, where the gatekeepers of these two services, which are Facebook, the same company, Facebook, WhatsApp, Meta, sorry, they will be required to open up their services and interoperate with any competitor that wants to, and the details are of course not yet to be defined, but the clauses are there, at least in the parliament's version. So there are some concerns around this, because then of course the text of law only gives you some high level definition, but then there will be an implementation phase, and even in text there's still a discussion on the wording. But the institution, I mean, is concerned, I mean there are people within the institutions that are not convinced around this, and they say, we're not convinced that there is actually industry demand, because they've been told by the gatekeepers, I mean also by the closed source industry, that they don't want this, and they're worried about privacy, because the father of the gatekeepers was like, I mean, yeah, but now we will have to exchange and send personal information to these unknown interoperating servers, and it will be a nightmare for privacy. I mean, some concerns are there, but it's not a real issue. On the other hand, from the community that have been concerned around the implementation, especially who will pick the standards, because there's, I mean, it makes a world of difference if the standards are really open or not, if anyone will be allowed to interoperate, or if the gatekeeper will be able to set conditions, ask for money, or put terms and conditions, and also there was the initial idea that we should get interop for all the other core services, which unfortunately we were unsuccessful at. So there are still many open questions, and the technical model, should this be achieved by each gatekeeper working out their own API and exposing it, which gives them more control, or should we actually explicitly write in the law in some way that there should be a neutral open standard, which I think would be preferable. But there's also an issue about business models. So we've seen some moves by some of these big companies, not met themselves, but for example, Zoom and Cisco and Slack and others, that have been investing in companies that want to provide you interop as a paid service. So I think that's it. I mean, rather than having a right to interoperate, a bit like privacy, it's a different view. In Europe, privacy is a right. In the US privacy seems to be a service that you have to pay for on top of the actual service. 
So again, this is a model that in Europe we wouldn't like, and so we have to make sure that this is not allowed. If they have to interoperate, it must not be like, yeah, you have to go buy interoperability from this company, which I invested in, so I will still make money out of it. So this is the state of the discussion. I want to leave a little room for discussion, and then I hope to take questions, and then we can follow up also in private. Thank you for listening. Because we're quite tight on time: will the DMA actually have an effect? Well, there's a discussion around it, so things might still change. Actually, the US is pushing back on Europe, asking that more European companies be covered, because currently it's basically just about US companies. But this will really depend also on the implementation. If the gatekeepers are forced to open up and use a standard open protocol, then possibly others will naturally start using that. If they are just forced to open up, let's say, their own proprietary ones, there will be interoperability with them, but I don't think there will be any other effect in general on the entire ecosystem. Excellent. And then the next question would be regarding the interoperability of messengers. Perhaps you can give a really short answer, and afterwards, yeah, we can discuss the whole topic in the chat. Yeah, well, basically, I think the fact that you can interoperate doesn't prevent your client from, for example, deciding that it will never send a message to any Telegram user because it's insecure, or from negotiating an encryption level which is fine with it and not accepting other communication. So it's still controlled by you as a user. Okay, thank you.
Last year we introduced the reasons and the plans for the new Digital Markets Act of the European Union, regulating online markets to foster more competition with the dominant gatekeepers. In 2021, the act was discussed and finally voted on by the European Parliament, which expanded many of its provisions and strengthened the new rules. In 2022, the act will be negotiated again with the Member States and then, possibly, finally approved by the Parliament. In this update we will explain in detail what has changed and where we are. The Digital Markets Act introduces rules for "gatekeeper" digital companies, defining how to recognize them and setting obligations for them to fulfil. These obligations generally affect their business practices, such as the terms and conditions that they impose onto their business users and the consumers; the way they enter new markets and exploit their strength to conquer them; the opportunities that are left for users to choose competing services or move out of the walled gardens. Principles like unbundling and interoperability have been recognized as useful tools to promote competition. It is still to be understood whether all these obligations will survive the negotiation phases and will be confirmed in the final act. However, discussions are also starting on how some of these new provisions may be implemented, and how to define the details and the technical standards. The talk will present the current situation and solicit feedback and comments from the community.
10.5446/56839 (DOI)
Hi, and welcome to the organizers panel of the legal and policy dev room at FOSDEM. You stuck with us this long. This is the end of our second afternoon of material, and we organizers wanted to come here, talk about a few relevant topics of the day, say hello to you, and tell you who we are. So shall we introduce ourselves, everybody? Some of us have already been on moderating panels, but my name is Karen Sandler. I'm the executive director of the Software Freedom Conservancy. I care about software freedom because I'm a cyborg lawyer: I have a pacemaker defibrillator implanted, and I would love to see the source code in my own body. So Alex. Okay, I'm Alexander Sander. I'm a policy consultant, and therefore I'm trying to make sure that everybody in the European Union knows about free software and the advantages of free software, and especially decision makers. Yeah, so that's my job. And I hand over to Bradley. So I'm Bradley Kuhn from Software Freedom Conservancy. I'm the policy fellow there. I've been helping organize this panel for a while, and I'm excited to work with our new team. Those of you who have seen our panel in person at FOSDEM in the past see that we're a little different group here. So Richard. Oh, Max. I just continue. All right. So my name is Max Mehl. I work for the Free Software Foundation Europe. I work there in different areas; I started with policy and meanwhile also work more in the legal area, where I coordinate a few initiatives like REUSE, for instance, which we will definitely talk about later. And yeah, I've stuck with the FSFE for a long time, and I care about free software too, as most or all people here on this panel do. I'm Richard Fontana. I'm a lawyer at Red Hat. My work involves mostly open source and free software related legal matters, and I've been doing that for a long time. And I've been involved in some way in helping organize this dev room, excuse me, since pretty much the beginning. So happy to be here again. Yes. And now is a really good opportunity to say thanks to my fellow core organizers who have done this all of these years with me, and thanks to our new organizers for participating with us. It's been a really, really challenging and also fun year, as it turned out, thanks to all of your participation. I want to take a moment to thank Tom Marble, who had been a co-organizer for previous dev rooms and who always worked so, so hard. We keenly felt his absence this year, and I hope he's watching along and knows that we really appreciate all of his past work; we have thought about what he would say at every step of the process. I'm sure he's watching right now and will be saying something in IRC for those of you following along in the chat. So I think that leads us perfectly to the first topic to talk about. Normally on this panel we talk about what we see as themes that come up in our dev rooms, or really important issues of the day. And I think one of the things that is the most poignant is simply events in the age of COVID and how operating during the pandemic is an opportunity for software freedom but also a challenge, as proprietary solutions are foisted upon us. Yeah, I've been watching this really carefully this year, and I'm very concerned that video chat quickly became the center of how people got their work done, at least here in the United States, and I'm curious to hear from my European colleagues whether that's happened there. 
It's so dominated by a single proprietary software company, namely Zoom, that people in the United States use Zoom as a verb to mean video chat now. They talk about doing a Zoom, talking on Zoom, being on Zoom. And it's very frustrating to even tell people that there's an alternative that's available. We're using Big Blue Button to record this panel. Jitsi is being used by Fosdom to do the live chat during the conference. And they are excellent free software technologies that we just have had great challenges, at least we've seen here in the US, getting others to pay attention to. Are you all seeing the same thing in Europe, has it been difficult to get people to switch off proprietary video chat platforms? I would say definitely, yeah. We had those issues as well, especially in the beginning of the pandemic. But I have to say that we also saw a lot of positive examples here. Like for instance in the educational sector, Big Blue Button is a known thing. I know a few students right now, younger and older, and most of them are aware of using Big Blue Button when I invited them for an association's meeting. So at least that's some good news. And we have a lot of activists who spread the word about those alternatives, not only regarding video chat, but also other collaboration tools that we see. So there's also a pride side of things. Also maybe to add here, what we've seen in the very beginning of the crisis is that many companies use the term free software in order to promote their software, which isn't free software. And it's more likely a shareware, a freeware or whatever. And they had subscription for three months or stuff like this. And they really tried to get on the market with using the term free software. And this is also what we've seen in the beginning and what we tried to challenge with some news articles and press release on that. But yeah, this is also something we should keep in mind for future that the term free software is not connected to these kinds of offers. Yeah, indeed. And since the early 2000s, I've encouraged people to just at least in speaking in English to say software freedom instead, because it's less ambiguous. Do you have software freedom is the question to ask people. And it's not just happening, I think with video chat during the pandemic. There's just this whole group of proprietary technologies, many of which replace technologies that were invented in free software. So if you look at things like Slack and other proprietary technologies, we've had free software chat clients since the beginning of the internet in the early 1990s. And now these companies have found a way to sort of insinuate proprietary technologies that replace standing free software applications in the marketplace. And so it was so bad that I recently participated in an online conference where every technology for the speakers was a proprietary technology. The speaker's guide was in Google Docs and the back. It wasn't on Slack, but it was on Discord was the speaker chat and the video platform for recording talks was proprietary and the online collaborate, the venue platform for the day of was a semi free software license under a non-commercial use only style license. So even free software conferences are having a challenge. 
It's kind of, it's very impressive that FOSDEM, while it's been certainly very difficult to organize a dev room remotely this year, one of the things that one of the organizers of FOSDEM told me last week was that their goal was to prove to the world that you could run a conference as large as FOSDEM as an online event during COVID using only free software. And as we've seen these last two days, they have succeeded and we've pulled this off. So it's really impressive. And while I've had my frustrations trying to organize this event remotely, it has not been fun. I wished Tom Marble was back many times, and he's been laughing at me every time I talk to him, appropriately so, because he did all the work in the previous years. I'm really glad that this has happened so that we can show that these events can be done with all free software. Yeah, I was going to say that I don't actually feel like this year, this past year, has been really significantly different. And I think that's one of the points you're making, Bradley. I've watched the proliferation of non-free, non-open source tools even in sort of technical or developer communities that are oriented towards free software development for almost as long as I've been involved in doing legal work in this area, maybe not going back further than that. So maybe things have accelerated somewhat, but I see it more as a continuation of a pattern. I think that's true. And I think that what we saw was a highlighting and exacerbation of that trend, and we saw it happen in the health space, and we had a whole panel that mostly talked about this issue. You know, I think that the idea of focusing on software freedom has never been more important, because so many of these non-free solutions are being promulgated and people think that they're helpful in an immediate emergency, like the Medtronic ventilator that is allowed for use but doesn't grant the rights going forward. We're planning for this pandemic and our emergency needs, but we're not planning for the next pandemic. Yeah, I think, Fontana, you're absolutely correct that this has been an ongoing problem that's been exacerbated, especially in the developer communities. When I think about it, just to compare it to some of the other panels we had: in the compliance panel, we didn't talk too much about the compliance tools thing. And the main reason as a moderator I didn't push on that issue so much is because I know many of, most of the tools that people use, software tools, are non-free. FOSSology was mentioned on the panel, which is the only FOSS tool for compliance, more or less. There are a few others, but it's certainly the most popular one. All the other tools that are very popular are proprietary. And even the collaboration communities that develop those standards and tools are using proprietary software; in a major compliance tools project, you have to agree to a proprietary license and agree not to reverse engineer the mailing list software just to join the mailing list of that project. So we're really seeing more and more that people doing FOSS aren't using FOSS tools. And I can't help but mention GitHub, which is the most popular FOSS development site, is a proprietary software site with tons of proprietary JavaScript that people are using to develop FOSS every day. So I think that's something that the pandemic has just made more obvious, but you're absolutely right, Fontana. 
It was there for some time now. Yeah, it's sad, because we actually have the tools, like we have the alternative tools that work, like Big Blue Button. And if we were all to put our weight behind them, they would improve, and it would be this amazing thing. But instead, as a society we're doubling down on these proprietary solutions, and we free software contributors are the ones who are locking that in. So what do folks think? What are some other themes that you think have come up over the last year that we ought to address? Is there anything else major that happened this year that we should be sure to talk about? I mean, in terms of health, what I found really interesting was the discussion about the tracing apps. So there's also somehow a light at the end of the tunnel in this discussion around these tracing apps and interoperability, and that we can share data across borders, especially in Europe. That's an issue. And so this helped a lot, especially on the decision maker side. They now have a better understanding of why open source, free software is so important, and why, especially when it comes to sharing across borders and across languages, it's so important that we have this tracing app in Spain, in Germany, and I don't know, in Austria in a different language, but they are able to talk to each other. And this is only possible because it is free software. And we had a huge debate here in Europe around these tracing apps and especially around the fact that they are free software. So this might help in the end, and also in the health panel we've seen that there have been loads of hackathons, for example, where the results have been published as free software because it is a good idea to do so. And I think this panel was also very interesting in this regard. If you speak about health apps and the corona tracing apps, it's still quite interesting that for the platform for all of this we have two gatekeepers, Google and Apple, and it took the free software community quite a few months to make this possible, like to have these exposure notification APIs all implemented with free software, so people can just install it from F-Droid, for instance, on Android phones. So we're again in the situation where the software itself is free, but the platform is not. It's quite interesting, also for publicly funded software, to ask: can I really use it with as much software freedom as possible? And it turns out there is still quite a long way to go. Yeah, certainly when you compare it to the United States, where all the news stories were about how Google and Apple were going to solve the contact tracing problem, and all of the apps that people are using here are proprietary, so I'm so glad to hear that about Europe. As I heard when I was listening to the DMA talk, I mean, the DMA talk was sort of saying, well, we need to make this law so much better in Europe, and I was looking at his slides of stuff that's already in your laws in Europe, and I'm like, I wish we had that much. What you already have in Europe, I wish we had here in the US, because there are no laws here that are very friendly to interoperability and free software the way that you already have in Europe. So kudos to you all who've done policy work in Europe to make that happen over the last 20 years, because we unfortunately do not have a system where it's easy for us to get that stuff into our legislation here in the United States. 
Yep, and a shout out to Deb and Hong Fook's talk for, for, for bringing the conversation global. We are a global community as free software contributors. And it's, it's important to, to learn from all of the, the work being done in different places, especially where it's successful. And I, I agree it's the US is not, is not a great example of that, which maybe is a good transition to talk about something that has happened in the last year that we didn't cover in the Dev Room Bradley. I don't know if it's makes sense to talk about just to fill people in on the DMCA stuff when we talk about how bad things are in the, in the United States or Richard, I don't know if you have any. I mean, the start of the fountain I talked about this a lot. I mean, the startup culture in the US has had some influence on FOS and not usually particularly good. I mean, do you want to, do you want to talk a little bit about what's happened in the last year with, with, with some of the startups have done with regard to licensing that's been really much in the news the last, the last year? Yeah. So I remember we, we actually talked about this in our organizers panel last year. I think it was last year and not, not the year before. And it was a major topic then. So we were seeing this trend of, you know, I want to say startups, but it's not, you know, I'm not sure it's, it's limited to what I would call startups, but smaller tech companies that have grown up around a sort of vendor controlled free software slash open source project. You know, typically using a certain type of governance model that emphasizes, you know, using a kind of asymmetrical contributor agreement, a CLA or whatever. And not really having a very, you know, a significant contributor community in part because these, these companies tend to be hostile to, to outside contributors for various reasons. And then these companies sort of a few years ago started experimenting with licensing models that resembled, you know, sort of free and open source software licenses in some respects, but deviated from them in significant ways. And you know, MongoDB and the server side public license was the first notable example of this and at least in the modern era. And that was about three or four years ago now. But we saw a number of other companies moving in this direction. And we talked about that a little bit last year. So very recently, the latest company to do something like this was Elastic. Earlier this month announced that it was going to use the server side public license. So the license that MongoDB had introduced for, for some of its projects. And so, so, you know, this, this is, you know, from my perspective, at least a pretty disturbing development. These companies have, you know, in part been sort of blurring the meaning of, you know, it's really open source, not free software. So they've been, they've been blurring the meaning of open source and sort of trying to, to push on the boundaries of the open source definition. And, you know, this, the, the, the main feature of these various licenses is sort of, you know, I would say sort of use restrictions. So kind of prohibitions on, on use cases by competitors. Essentially, you know, to a large extent, these companies are concerned about competition from, you know, cloud providers. And that's, that's kind of motivated some of these, these license changes. 
But you know, kind of more, more broadly, I think this just sort of is part of a longer theme of, of, you know, tension that's existed, you know, between sort of like free software or open source as a means of kind of building a basis for business success versus the kind of ethical goals that, that lie behind, you know, free software and, and I would say open source as well. And, and we know year after year, we continue to see interesting examples of this. And this is sort of the, the latest, I guess. And interestingly, we had a talk in our, in our Dev Room this year, you know, about this kind of proprietary licensing business model. I think when we were doing the acceptance for the talk, for the talks, I was sort of most skeptical about that because I didn't want to provide a mouthpiece to the proprietary relicensing regime that MongoDB and Elastic and other such companies are putting forward. The really nice thing, I kept an open mind about it. And I actually think it ended up, I would give that talk the best talk of the year award on our track because I think it really laid out in a very clear way how the, the, the Q, QT situation impacted the KDE project and how the KDE project by being a strong existing free software project probably, and still to this day, I think probably the largest user of QT, of anybody in, in any software space at all was able to leverage their, their community power to assure that QT remained free software and to bind the company and its successors to continue to improve the free software. It's quite a magic trick from my point of view that they were able for so long across so many owners of QT to assure that, that, that the public version of free software version of QT did not become a, you know, just a, you know, unmaintained kind of afterthought release. And that's something that I think was unique to KDE. I disagree a little bit with Cornelius's conclusion that we could do this for any of these projects because I think it was almost a artifact of its time that, that open source was not something that someone wanted to market around. I don't think any QT could go to other customers in the late 1990s and convince them to buy, to, to, you know, to buy based on it being open source, whereas MongoDB and Elastic are seeing that. The other thing that's really disturbing about the Elastic move different than the MongoDB move is they moved from a free software license to this SS public license, which is, is not a FOSS license. And it's played into this view of copy left versus anti copy left because MongoDB tried so hard to convince people the SS public license was the future of copy left as they, they put it when they began marketing it. And here we have Elastic switching from the non copy left Apache license to a, to SS public license. And so I found it very difficult as an activist and a policy person to explain the nuance of well actually the SS public license isn't a copy left license. And if it were, if there were a copy left license they switched to, it might have helped them fight Amazon in the way that they wanted to. But what they did instead is they switched to this non free license to fight Amazon. So, so I, and I have, I'm curious how that's playing out in, for my European colleagues in Europe and if folks are, folks are able to see that nuance in a way they haven't been able to here in the U S. I'm not so sure whether there's a big difference between the European view and the US review. 
It definitely troubles us as well at the FSFE that this happens. Definitely. So I think the difference here between the KDE and Qt model that Cornelius presented and Elastic is, well, Qt and KDE wanted to cooperate. And they were mature enough in the sense of, let's cooperate with each other. So they had this agreement, and it has been fulfilled also by the successors, basically. So we see here a successful, fruitful cooperation, while, as Richard already said, a CLA is an asymmetric way of contributing, and so this basically laid out that this could happen. And I think a big topic will be how free software contributors want to interact with companies or organizations that might take their contributions away and make them basically proprietary. So this is a discussion to have, and I think it's not bound to the US or to Europe specifically. But yeah, I found this maturity discussion quite interesting, a shared theme between Cornelius's talk and the compliance panel that you moderated, Bradley, which I found definitely interesting. And I would also quote David from Huawei when he says that, well, you can really see whether a company is mature enough if their FOSS compliance is actually a mature process, a good process that they have. And I quite like this, that companies think in free software terms from the beginning on, and don't treat this as a thoughtless afterthought. And to be honest, I missed a little bit the mention of REUSE here, because I think this is a perfect example of how communities and projects can fix and clarify their licensing and their copyright from the start on. And this is a thing that can be created or worked with by organizations and individual developers, no matter their size, no matter the project size. And I would love to see this more, that people care from the very start on, like the Yocto project, where with every release, along with the software, they have also properly declared the licensing and copyright of their project, because I think we still waste too much energy on fixing problems after they have been created, with tools like FOSSology, which are great and which we need, but we should put more effort into fixing those issues before they have been created. Yeah, and I guess we could tell our audience that there was an excellent talk submitted on REUSE, and our FSFE colleagues were surprised to learn that we had long ago created this rule that, unless it's a substitution talk because of somebody not showing up, we've always made it so that any organization that's represented on the organizers panel can only have one talk from the organization in any given year at FOSDEM. So we did have to turn away some excellent talks from your colleagues at FSFE under that rule. So we're sorry that you were unpleasantly surprised by... No worries, I wasn't really surprised. That's fine for me, and I love that my colleague also had his chance to speak in the track, and if people are interested, I gave a similar talk in the OpenChain devroom, so just a pointer. So in the interest of the talk we weren't able to give, Karen, Conservancy had some work this year that I guess we could cover here, that we would have submitted a talk about if not for that rule, regarding our DMCA work. Do you want to talk a little bit about that, that happened this year here in the US? Sure, sure. 
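As a concrete illustration of the REUSE approach mentioned above, here is a minimal sketch of what a REUSE-style header looks like in a source file; the file name, function and copyright holder are invented for illustration, but the SPDX tags, the LICENSES/ directory and the reuse lint command are the actual mechanism the specification relies on.

    # example_module.py -- hypothetical file, shown only to illustrate REUSE headers
    #
    # SPDX-FileCopyrightText: 2022 Example Contributor <contributor@example.org>
    # SPDX-License-Identifier: GPL-3.0-or-later

    def greet(name: str) -> str:
        # The function body is irrelevant; the machine-readable
        # licensing information in the header above is the point.
        return f"Hello, {name}!"

With headers like this in every file and the full license texts stored under LICENSES/, running reuse lint in the repository root reports any file whose licensing or copyright information is still missing, which is the "fix it from the start" workflow described here.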
I was looking to transition to it a little bit before when we were talking about how much worse the laws are in the United States compared to elsewhere in the world. So it seemed like a very easy time to transition to the Digital Millennium Copyright Act in the United States which provides prohibitions on circumventing technological protection measures in order to even do lawful uses of the technology. And so there's a process every three years where folks are invited to propose exemptions to that rule and Conservancy and others have been involved on behalf of free software. Throughout we, you know, many of the organizations protest the existence of the law to begin with and then engaging in the three-year cycle allows us to propose exemptions. And so Conservancy in the past applied for and won an exemption for smart TVs and I personally participated in one for medical devices. And this year Conservancy applied for a number of new exemptions. Ones to allow us to basically allow us to circumvent so that we can see what software is running in a device so we can know if there's a GPL violation. And so basically circumvention being used in order to hide copyright infringement. So it's sort of a novel argument for the Library of Congress in the United States and I'm looking forward to see how that plays out. We've also applied for one for routers which connects back to our router freedom talk that we had earlier in the Dev Room. And we had Bradley help me out. We are the only organization that was unwise enough to file three exemption requests. And we've filed one for a small expansion of the privacy restrictions that are in. There's a privacy allowance already in the law here in the US but one of our filings looks for a kind of a small expansion of that privacy exemption that already exists. Which is not granted it's not as big of a deal because the privacy exemption in the law is already pretty broad which is fortunate. But we're trying to move that edge a little bit forward in our exemption request. And with the highlighting that if we have control over our software we're going to use it to protect our privacy. And then I personally was also involved in an expansion of the medical devices one too. And so I'm excited about that process. It's granular but by moving the needle each time we start to see real freedom. And I think that because I think that what happens in the United States on these issues does have something of a reverberating effect globally. So it's good for people to stay up to date. Yeah and as part of that process this year I did some I spent two days kind of after we filed those I went digging trying to figure out why the DMCA is such a horrible law here in the US. And it's very it's very interesting history that it is worldwide affecting because it's because it's based on a WTO act the World Copyright Act I believe it's called WCT. It turns out the US kind of unsurprisingly implemented use the existence of this to bring in lots of things that media companies which of course many of which are based here in the US wanted as far as restrictions go. So our law here in the US goes much further than say the EUCD does in Europe but it's really a worldwide problem. And the amazing thing is that this this all started back in the early 90s. And so and then the DMCA is passed in the late 90s. So this is this is some 22 years of bad policy that we've had. And many people who are probably watching our deaf room like weren't even we're children when all of this policy went into place. 
And so we've looked at really trying to educate more about why these policies exist and how bad they are, things like the DMCA, because most people have grown up with these as standards, and the chilling effects that they create have become a regular part of life, not just for free software but for all software. Yeah, in this regard, just a note, it was also quite interesting that the DMCA case around youtube-dl also had an effect on European hosters, for instance. So we had a few cases here, at least which I know of personally in Germany, where mirrors of the software also had to be taken down, and it's quite interesting that you might have bad regulation in the US but it definitely has an effect on Europe as well, as well as the other way around. Yeah, I just noticed that we had still one talk which we didn't speak about yet, which is the "give open source a tax break" talk, and perhaps, Alex, you want to talk about this a little bit, since you saw it, as I know. Yes, yes, I attended this. It was also quite interesting. I mean, it's also a general question: how can we finance free software projects? And it's not only about a tax break; it's a general and fundamental question how we can get money into this. One solution can be these tax schemes they have in France, like you spend $10,000 and you get 66%, I think it was, back from the state, if it's for the whole community and things like this. So we have something similar in Germany. So I think it's a general thing. In Europe, for example, we have Horizon 2020, a big research program; this has billions, and I think a lot more money could go into free software projects as well. We have these open tech funds and things like this, and discussions around funding in general. And this is something we should also think about and share some ideas and best practices on. I also think it's on the government side to fund free software projects: they use free software and they should also fund it, and it's also good for our whole society, and therefore there should be funds available in order to support this, as we've seen now in the corona crisis. It would be good to have some solutions in place beforehand, and yeah, so here state money could be a game changer in the future, and we should make sure that there are funds available in order to make sure that there are good free software solutions in place for other crises, but also for the normal situation as well. Yeah, there have been really interesting proposals in the United States over the years to provide tax breaks that would have real impact on software freedom contributors in the United States, things like proposals that were designed to benefit artists that would provide benefit for free software developers. So in the United States, if you make a donation of your code to a charity, you can deduct only the cost. So if you're an artist, for example, you can deduct the cost of the painting, like the canvas and the paints, but you can't deduct your time, and so even if you're a world famous painter and anyone else could sell your painting for millions of dollars, you can only take a tax deduction for your materials. And so there have been proposals in the United States to change that, but none of those bills have passed. 
And so it's just interesting to to hear about, you know, possibilities elsewhere and to possibly revive some of those conversations in the United States too. So hopefully so this is obviously we said the pre-recorded part of our session given that we're at the end of the FOSM schedule by now probably all the online stuff should hopefully be working without any glitches. So we're going to hopefully join you all in the online chat after this and be able to to take questions from those of you that have watched our entire Dev Room here for the virtual legal and policy panel. And from my point of view, I'd be glad to be back in Brussels next year. So hopefully all the vaccines work and COVID coronavirus is the same as the flu by the time we spend time at FOSM and Moles around again, we can only hope. Yeah, I want to once again thank the FOSM organizers. It's so much work to put on a conference like this and they just did so much more work to make sure that nobody had to use proprietary software and that's so awesome. And I'm sorry we're not in person where we can't stand up and applaud them and also thank them in the hallway. And yeah, so, you know, I just wanted to mention that and then also we're also happy in the past we've had this time to hear about feedback from you all in terms of what you'd like to see in the future and play ways that we can improve the legal and policy Dev Room. And so we'd like to address questions first and then feedback. But if for some reason we don't have the live Q&A, feel free to contact us and give us that feedback. Bradley, do you want to tell them all what they should do in the room? I, we don't know all the details at the time of recording, but there's probably stuff on the FOSM website and show you how to talk to us next as we wrap up here. So by the time they see this. I was joking. I was joking. I was the normally say clean the room. Oh, right. Yeah, yeah, that's true. People don't have to pick up there. If there's any trash you left behind, it's in your own house, your own home right now. So usually we have to clean up the venue. So clean up your house now. Yeah, like, yeah, okay. Everybody go clean your house and make sure that it's ready immediately. The next group when they come in on Monday. All right. Well, thanks everybody. Thanks for watching. Thanks to my co organizers for another FOSM. Thank you. Bye. We'll do Q&A now. I think you should get started. Wow. So we have made it to the end of FOSM 2021. This is the last session of our legal and policy dev room. I want to thank the audience for sticking with us and being here for this whole day. We've still got some time though. So we organizers are here to answer all the questions that you might have. And yeah, so I'm going to just start going through the questions in the channel. Again, by how they were upvoted. So we'll start with the first one, which is from Krishna, which is what did you miss most about the physical FOSM and what was the positive aspect of the online event for you personally? So I missed waffles and I made the waffle recipe as recommended. I was told by my co-panelists that I should not bring the batter out to show it on camera. I did not have time to make the waffle before, but as soon as this is over, I'm going to make the faster maffles, which they won't be as good. Yeah, I don't miss that my feet don't hurt. That's great. Awesome. I missed the hallway experience, definitely. 
So the chatter, but I have to say, I have to admit, I'm really impressed by how this has been pulled off by FOSM, how the experience came across the talks and the discussions afterwards. So really good work. Yeah. I think also all these social parts and the social events beside the FOSM itself, this is what I'm missing. I'm missing Brussels. Yeah, but Max just said, great. Thank you to the organizers of this virtual FOSM here. It went very, very well. It was quite a lot of fun. And yeah, thanks a lot. It made so much fun. And I want to put a really fine point on the fact that we were talking a lot in our pre-record about the questions of conferences requiring proprietary software like Zoom. This conference was done with 100% free software. I hope you all had a good experience, but from our point of view, it was completely seamless. I mean, it's not, of course, it's not as good as a live event. But this is the best online conference I've attended. And I think it's completely unreasonable for anybody to argue they have to use proprietary software to run online events now. There was one piece of proprietary software and all this one CAPTCHA that you had to get through to make the, if you made the account on chat.fosm.org. But for that little 100 lines of code to be the only proprietary software involved in all of this, please support FOSM. I helped them launch the T-shirt. And I was the first person to buy a sweatshirt because I happened to be on IRC at like two in the morning, Europe time when they launched it. So I bought a sweatshirt. I crashed the database, but I'm getting my sweatshirt anyway. But I encourage you all to go buy the T-shirts and sweatshirts and support FOSM. They're mostly volunteers doing this to make this happen. It's been amazing. I echo that. They did an incredible job. I miss seeing them all in person. I miss seeing all of you in person. It's not the same. Although I am surprised that it was so engaging and so all-consuming that it was just like a real FOSM where I didn't have a chance to eat or drink anything the whole time. And I also missed the chocolate in addition to missing all of you. So Richard, what about you? Oh, I mean, for me, Brussels is a totally magical place this time of year, despite the weather, which is slightly better typically than the weather I'm at now in the Northeast US. But it just has a special place in my heart. I've been going to FOSM for so many years now. But the online conference is really impressive, I have to say. I'm really happy to see how well that's worked out. So the next question that was upvoted is a follow-up to this one, I think, which is to panelists from the US, how does it feel that at the end of the sessions and follow-up discussions, you still have several hours before it is getting dark? The question is the other way around. I got up at 4.30 both days. The first day, none of my alarms worked and Karen had to call me. Second day, the alarms didn't work. I would say, yeah, it's really the waking up that's the issue, not the rest of the day. It doesn't help that much. It's very weird to have FOSM and then have a day with my family. That is super strange and really lovely. And also, again, makes me miss everybody more. But it's snowing here, so it's very, very brightly light outside. And it reminds me of the year where we had this massive snowstorm at FOSM. Richard, what about you? Oh, there's so much. I'm going to use this question. Yeah, so part of the experience is actually the jet lag. 
And I'm not experiencing any of that now. I got a full night's sleep. And it's not quite the same thing, but I can remember what it was like and kind of cherish that. If you move to the West Coast, I basically have the jet lag, right? Well, it was great that the FOSDEM organizers allowed us to do two afternoon sessions rather than a full-day session. It really made it a lot more manageable for those of us in other time zones. I didn't want to get up at 1.30. All right, so what's going to be the next topic we're going to discuss next year? Let's start with the Europeans, since they didn't get a chance to answer the previous question. Well, that's hard. I mean, we kicked the session off with the European open source strategy by the European Commission. And as they just started this, I would love to follow up on this and to see what they've done in the last year. But we have also seen the DMA and router freedom, and there are so many issues we are working on. And yeah, so it's hard to predict what the most important one will be. There are so many, and we've seen a lot, and most of them will still be there, I guess, also next year. Yeah, I think these are the policy topics. On the legal side, it's also really hard to predict. I guess we share the same issues there. Perhaps another surprise, another project that goes to the SSPL. But perhaps also that developers think about CLAs, this asymmetric relationship with the companies that they are entering into. Perhaps we will see some discussion there, I hope so. But otherwise, yeah, I think in the chat it was mentioned, Google versus Oracle could be a topic, maybe, depending on the outcome. Yeah, and on the general side, the learnings from the corona crisis. We also discussed it here in the pre-recorded talk a bit, and I'm pretty sure that will keep us busy this year as well. Bradley, do you want to add anything, since I cut you off? Yeah, I think what decides our content is a lot of times what submissions we get from all of you. And we encourage people to submit talks when the CFP opens; watch the FOSDEM mailing list, which is where it's posted first. We'll try to promote it in as many places as we can. But we can only do as well as the talk submissions that we get. And we're committed to making this the place where you can give a talk about an advanced topic like you saw. I love that many of the speakers had to apologize when they did something basic, like during the AGPL talk. They had to say, we're going to explain minified JavaScript, as we know that's too basic, but we do want to make sure we cover it. And so we want to see more advanced talks, so submit them. Yeah, and let us know if you have suggestions for topics that should be covered, even if you don't want to do the presentation. Sometimes if we feel like there's a really important topic that's not being covered, we'll put together a panel to address it. So just give us feedback. It's very welcome. So to get to the next question: how to make companies stop using the term free software for non-free software programs? So as I just said, we tried to challenge this with blog posts and press releases and also collected alternatives in a wiki. And I think this is what we should also continue: creating awareness, trying to prevent people from buying or getting these products and running into a vendor lock-in, and creating awareness on social networks, like commenting. If they tweet about it, for example, then just post another tweet and say, this is not free software. 
And so prevent people from running into this vendor lock-in and create awareness, I think. Yeah, I find that inundating people with questions is good, like the constant "this isn't free software". It can sometimes wear thin, depending on who the recipient is, but it's always a good idea. And sometimes it helps to just ask whether the software provides the freedoms that we expect free software to provide. Richard, go ahead. Yeah, I mean, I think in a way the question is ambiguous, because it could be referring specifically to the phrase "free software", or in a broader sense to free software in the sense of free software and open source, because more people in the world use the term open source to mean approximately the same thing. And a lot of the abuse we see of that set of two terms occurs on the open source side, because I think there's so little awareness of the free software terminology. And I think that the term free software, in the English sense of gratis software, is maybe not as common today. Maybe that's just my own perception. But I think this is basically a linguistic problem, and the only way you're going to solve it is through a concerted effort to make people aware of what software freedom advocates see as the meaning of free software. Yeah, I mean, the term software freedom was coined long, long ago. I've been encouraging people to switch to it as the generic term for what we do since the early 2000s. Many others have done the same. I just say, you know, say software freedom. And the other phrase I've been using a lot lately is user rights, rights of the users. And if you focus on those phrases, I think the ambiguities of the linguistic problem melt away. So the next question, from JWF, is: what role does individual consumer awareness about privacy and open source play since the pandemic began? Is there opportunity in outreach and advocacy that could be better leveraged in light of the increasing dependence and reliance on digital solutions since COVID-19? It's a tough question. Yeah, I was going to say that I think, to some extent, we've been on this path for people to understand these issues more and more every year. Like, I remember five, ten years ago at my family functions, I saw their eyes glaze over whenever I talked about how vulnerable our technology was. And they were very nice to me because they're a really nice family, but they had no idea what I was talking about, whereas that's changed over the last five to ten years, and people seem to really understand that we are vulnerable based on the technology that we choose. And it's in small steps that we're getting there. And I think that the COVID crisis has cut both ways. I think in some ways people are really open to new solutions. A lot of people, for example, were using Zoom who didn't use Zoom before, so it was like an introduction of proprietary software, unfortunately, to them. But for the folks I got to first, who hadn't been using video chat, they did start using Jitsi or Big Blue Button more. And so there have been a lot of opportunities. It's hit or miss. We have to really stay focused on making it about the next pandemic or the next crisis rather than about now, because we have to acknowledge that people are doing the best they can and that everyone is in a really stressful situation. 
Yeah, I also thought a little bit about it. I also see this split between groups of people, like some who are really aware of these issues and receptive and others are not. I'm not so sure. I saw a lot of discussions recently about those logos, like we, I think in Europe or only in Germany, I'm not sure, these blue angel logo, so where they mark sustainable products and a colleague of mine is working to get free software for sustainable software, basically, as a requirement. Not sure whether this helps, like these simple symbols, how people can see that something is good, even if they do not fully understand why it's good, they trust in the logo or in the picture, basically, and not sure whether that's a solution. Yeah, I think that one of the things that I recommend, so my spouse who didn't regularly use Zoom, of course, is using it every day now for her work for a small nonprofit, I really encourage people to go to their community organizations and volunteer to set them up with things like Big Blue Button and Jitsie and other technologies. This is a place where direct local volunteer work can actually help. You can't see them in person because it's socially distanced, but you can call them on the phone and talk to them about it and possibly help them get set up with alternative technologies to the proprietary ones they need during the pandemic. Okay, so the next question is, where is Copy Left Conf this year? Karen, should we pre-announce? I mean, I think we may as well. Okay, so one of the things we were waiting to see how FOSDM went, because if FOSDM can organize an online event like this, we figured we could probably draft off of their technology. We are going to focus on trying to do it not all at one time. I am not a big fan of asking people to get up at weird times to go to conferences. So we're going to try to do it as a seminar series on Copy Left over the later part of this year. So that's what Copy Left 2021 will be like. And we'll announce details on sfconservancy.org when we have them, which we do not right now. It's really fun to be there all day talking about issues that we care about, like Copy Left when we're in person, but virtual conferences are just exhausting. So we're going to probably, the sessions will probably be like an hour at a time over a few weeks. Now, a question that may take all the rest of our Q&A time, which is, how is the SSPL not Copy Left? Well, I mean, so the quote I've been using, it's a common quote in various places. I don't know who sourced it. But I keep saying every tool can be used as a weapon if you hold it wrong. And I think that whether or not the SS Public License is a Copy Left is sort of not that interesting of a point. Maybe it is, maybe it isn't. But even if it is, it's an abusive manipulation of what Copy Left was supposed to do. If you design a license that specifically makes it impossible to comply with the license because you have to, don't forget, SS Public License requires that every single piece of software involved in the stack on your computer, derivative work or not, has to be under the SS Public License. And no one can actually do that in the real world. I have yet to see someone who is licensing the SS Public License, licensing outbound on it who won't take it. As my colleague on the panel here, Richard Fontana, said years ago, inbound equals outbound is the right way to design contribution mechanisms. And that's not what the SS Public License is being used for. 
So I just think it's not even worth spending as much time as we've already spent on it. Yeah, I mean, it's in one sense, it's an interesting issue of language definition. But then once you decide where you stand on it, if a Copy Left license is defined as a free software license or whatever, a Libre license or open source license that has the features of Copy Left, then if you accept the view, which I think is the consensus view now that SSPL is not a free software and open source license, then that answers the question that it can't be a Copy Left license. But beyond that, I'm not sure it's really a very interesting question that it's really the policy issues about why the license isn't a free software license is the important question to think about. Okay, so we are now at the end of our Q&A session. We're going to go into the, what's the hallway track, but the live room where you can all join us and just see if you'd like or interact with us via text, the room link will be provided in the channel. But I want to take this opportunity to thank everybody and to say, clean your room. No, I mean, to say thank you for joining and also leave it to my panelists to say anything else they might want to say. Last words. Thank you. Thanks to the first time organizers. Thanks for attending. It was great and hope to see you next year in person. Max. I can only second Alex, that has been great. Thanks to my co-panelists here and to this co-organizers has been a really good experience. But yeah, we noticed time zones matter. I hope we can do this in Brussels live.
The organizers of the Legal and Policy DevRoom for FOSDEM 2022 discuss together the issues they've seen over the last year in FOSS, and consider what we can learn from the presentations on the track this year, and look forward together about the future of FOSS policy.
10.5446/56841 (DOI)
Hello, my name is Masahumi Ota. I would like to talk about my experience of teaching open source licenses and compliance at a Japanese university. This is me. I started teaching open source licenses and compliance at a major Japanese university last year, and I have contributed to many open source projects over the decades. Now I contribute to the Raspberry Pi project with the Raspberry Pi Foundation and do Raspberry Pi training in the Asian area. This is the agenda. It was my first time teaching open source licenses last year, so your feedback is very welcome to improve my lecture this year; please share it after my session. First, the opportunity to teach open source licenses and compliance: why did I get the chance to teach this at the university? It was the students' request. I had returned to this university as a student three years ago. It was a really great time to study computer science, and I had the chance to talk with some staff who helped me with my studies. After graduation, I had a chance to talk with the staff at the university. Two years ago, they were looking for a lecturer to teach open source licenses and compliance. At the same time, I was dealing with many questions about open source licenses on Raspberry Pi OS, because business use of Raspberry Pi is now increasing. I had to spend a lot of time discussing the licenses of the operating system. I also discussed this with the Raspberry Pi training people and investigated many things by myself. As a result, I gained more knowledge to cope with these kinds of incidents. Since I could teach students how to deal with them, I applied to be a lecturer via the staff. Lately, there are many violation incidents involving open source licenses, and they are increasing. In particular, we can see the license violations around Mastodon, which is under the AGPL; some of you know it became familiar because of its use by Trump. Because the AGPL was violated, the Software Freedom Conservancy got involved in this incident of a violated Mastodon AGPL, and they had to disclose the source code of their modified Mastodon with Twitter-like features because of that demand. This is really important for business. Many enterprise companies actually train their employees in how to deal with OSS licenses in business, and they are getting nervous about dealing with it. I am really sad that some people lack respect for open source software authors and creators, because they work very hard to keep open source projects going. There were the colors and faker incidents in Node.js; it was erratic behavior, but he wanted someone to understand his hard work and respect him. I saw a similar case in the Open Flare project: he contributed his hardware, spending a lot of money, and he asked many people to donate, but he could not gather much money, so he closed his open source distribution, and he has been criticized by many open source users. It is the same kind of issue as the colors and faker one. I think respect is a very important element in keeping open source going. When there is similar distribution software, like an operating system based on the same inherited software, it is almost reinventing the wheel, and that may hold software violation traps. If you realize your project is reinventing the wheel, please check carefully not to commit a violation. Next, the difficulties of teaching several OSS licenses. First of all, I felt it was difficult to teach several OSS licenses, because if I had read the license descriptions directly, I am sure the students would have found the licenses difficult to understand. 
The licenses felt complex to them, especially the GPL, so I needed an approach that teaches with real-life violation incidents and uses many quizzes with hypothetical examples. Students also need the basics, such as automatic copyright and the basic elements that let them judge whether they are violating a software license or not; I should teach those in detail, because the same elements are needed to judge open source licenses. The lecture last year: it was my first time teaching. It was held on 29 May last year, a 90-minute lecture and a 90-minute workshop. At first I worried that might be too long, but it turned out to be too short to cover all the open source licenses and compliance topics, which was frustrating. Because of the pandemic, almost everyone was online. I taught in the classroom while the lecture was streamed via Zoom, and it was awkward to communicate. In a hybrid environment I normally check students' faces to see how the presentation and lecture are going, but I could not see their faces well via Zoom, so that was difficult. This is the lecture agenda from last year. I needed to introduce what software licenses are and how they relate to the law. Students should learn the principles of private autonomy, the law, and copyright before looking into open source licenses and compliance, because those elements are essential for the whole picture of software licensing. I also taught license management by vendors: we need to check the rules in a license to understand it deeply, because both proprietary software and OSS-based software have special rules in their licenses. Students should learn that a license is set by the authors, read its description and rules, and compare open source licenses with proprietary ones to see the differences. OSS utilization is also really important, especially with the GPL when you build on OSS-based software. Modification is the key point when using open source software: almost all open source based software is modified by its users. For example, OpenStack and other cloud software is modified by users and delivered through CI/CD systems, so understanding modification is essential to using and managing open source software. I tried quizzes and a mini workshop with gamification so that students could understand the kinds of open source licenses, but the LGPL and AGPL were difficult for students to grasp immediately, so I added an appendix with some videos for them to watch after my lecture and waited for their questions afterwards. There are many issues around the LGPL and AGPL, but the incidents are complex and students cannot understand them immediately, so I used videos and written explanations for them to study after my session. This is the workshop agenda. I prepared three hypothetical cases so that students could understand open source licenses. In the first case I introduced how to work with others: the legal department, OSS communities, open source authors, and analysis tools. For the second and third cases I did not give any hints; I wanted the students to solve them by themselves.
Case 2 happens often in real life: someone emails you and asks you to open your source code according to an open source license, so I wanted the students to solve at least that case by themselves. My lectures were also trial and error. I read some books and watched training talks by Japanese companies about open source licenses and compliance, and I realized it is really important to use real violation incidents and license check tools to teach how to deal with licenses and compliance. I checked many enterprise training documents; companies train their development departments in particular, because they modify open source software in their daily business. For example, when they build products on open source software, they must know how to handle the licenses in their own software. I have also joined open source projects where some people did not understand how to deal with the project's license. Before my lectures I discussed how to teach the students with the university staff, and they gave advice from their experience. Gamification is a really important element: it makes students interested in learning more and more. They also said that lectures alone may bore students, because they are full of complex terms, so gamification through quizzes and workshops is important for understanding how to deal with licenses. Feedback from the students: fortunately, the students gave me good feedback, and I am grateful for it. Unfortunately I did not have enough time to explain and practice everything so that they could fully understand how it all works, but most of them said they understood the relations between the licenses, the law and copyright. Some of them asked me about license issues related to IP, and yes, IP is really important: it is closely bound to licensing, so we should also check the licenses whenever we look into IP. The lecture coming this year: last year was the first time, and I think it went reasonably well. For this year I have been discussing with the staff how to improve my lectures. I am gathering more cases and planning more workshops, because I should spend more time on concrete workshops; that kind of gamification helps students understand how to deal with licenses. So if you have any ideas, I am very happy to hear them to improve my lecture this year. Thank you very much. Acknowledgements: a friend at the Linux Foundation gave me advice on how to teach open source licenses with his book, and the staff at the university gave me the chance to teach and advised me about the students. Thank you very much to the people who helped me and gave advice. If you are attending my session and have any questions, please chat with me afterwards, and if you have opinions or advice, feel free to email, tweet, or chat with me. Thank you very much for your time. See you. Bye.
I started teaching OSS licences and compliance at a Japanese university last year. It was difficult to teach OSS licences and compliance because I had to show many use cases so that the students could learn what the licences and compliance are; moreover, they didn't even know how to deal with ordinary software licenses. So I had to cover a lot of background knowledge as an 'introduction' so that students could understand the OSS licences and compliance with ease. Fortunately, there was good feedback from the students, though I need to improve my lecture by gathering more use cases etc. I will talk about my experiences at the university and discuss how to improve my lecture on 'licences and compliance' for students. I will cover:

The opportunities to teach OSS licenses and compliance:
- the increase in OSS license violation incidents
- keeping compliance so as not to violate OSS licenses
- respecting the authors and keeping to the license rules
- reinventing the wheel and compliance

The difficulties of teaching several OSS licenses:
- the difficulty for students of reading several OSS license texts directly; they are really complex for them, especially the GPL licenses
- the need to gather many incidents of OSS license violation
- the need to prepare good quizzes (exams) so that students can understand with ease

The lecture last year:
- why the university needs the lecture
- the trial and error for the students
- the advice from the staff at the university
- the feedback from the students

The lecture coming next year:
- gather more incidents to check
- look into the incidents more
- deep-dive into the practice
- look into new OSS licenses for DX
- any ideas... welcome!
10.5446/56843 (DOI)
Hello everyone, my name is Ricardo Mendoza. I come from a company called Pantacor, and today I want to tell you why we think that embedded Linux needs a container manager written in pure C. First, of course, who am I? My nickname is ricmm. I have been involved with Linux since the early 2000s. Around that time I got close to the MIPS architecture, I think because I got obsessed with porting Linux to a bunch of early handheld PCs. This was before smartphones; these devices were quite cool, and they used to be MIPS early on, some of them SH, before ARM came and took the whole pie. Some of that MIPS and later ARM work led to a drive for understanding low-level OS concepts. I ended up becoming a Gentoo developer as well for a number of years, maintaining a bunch of MIPS hardware; I no longer do that. Then I worked at Canonical for five or six years, not on the main distro but on the device strategy and device side of things: for example, early-day testing of Ubuntu on mobiles and tablets, which resulted in Ubuntu Touch, or Ubuntu on mobile as it was called back then. Now it is called Ubuntu Touch thanks to the community effort that still maintains it, of which I am also a part. After that I was part of the team that built Ubuntu Core, and that was the end of my relationship with Canonical. I left, for the most part, to found what we are doing now: I am one of the founders at Pantacor, since 2016, so I have been working there for a number of years. As a side note, I also sit on the board of the UBports Foundation, which advances Ubuntu Touch as a pure community effort. Now a little disclaimer: this session is not a tech deep dive, and we are not going to run tutorials for you to follow on your laptops. Rather, it is a conversation-provoking exercise; the idea is to discuss the topics at hand. And of course, at the end there will be a shameless plug: I will tell you a bit about the container management engine that I wrote for embedded. The important thing here is not what was written, but why it was written the way it was, and why we believe a container management engine in pure C is very important to the modernization and advancement of all the great work that the people in this room do. At the end of the day, what we do powers the infrastructure of the world: everything people use, all the gadgets, all the things that make people's lives easier, work thanks to embedded Linux. So, first and foremost: whales are too big for embedded boards. Why do I say that? Well, there has been a huge push from cloud vendors and cloud developers, whatever you want to call them, to take the technologies that have made sense in the cloud and fit them one-to-one into the world of embedded. This is kind of weird, because you have a bunch of people without a lot of experience of what true embedded means telling us how we should be using certain tools that made sense for the cloud. And that's why I have this image here: a managerial decision and an incident meant that these people had to solve the weird problem of fitting a square peg into a round hole.
Over there it worked and they were able to bring their astronauts back, but that doesn't mean we should always follow that square-peg-into-a-round-hole approach. That's not to say containers are not useful for embedded; on the contrary, containers have modernized cloud computing and we think they can do the same for embedded, and we're seeing it already. We know the usual benefits of containers: efficient resource utilization, modularization of software, portability, security; practices like CI/CD pipelines become straightforward when you have this type of modular software architecture. Docker containers, OCI containers and the like have been perfect for the cloud, where resources are near limitless. How does that work with most embedded Linux devices today? Let's see. First, how is the cloud different from embedded Linux devices? This might sound like an obvious question to you, but to the people investing tens of millions of dollars in marketing budget to tell us to run Docker on our tiny devices, I think it's not obvious, so let's say it here. A data center has near infinite resources. A server won't lose power, it has plenty of connectivity, it is resilient, and it has a technician next to it if something happens. And it's not truly mission critical in the same sense: with the current cloud architecture, things are very ephemeral; if something dies somewhere, it will just spawn again somewhere else. Infinite replication means that if infrastructure fails somewhere, it's not a huge issue. It is an issue, don't get me wrong, but not a huge one. Now tell that to your mother's router, the Wi-Fi router in her home. If it goes down, that's it: no more Netflix. Time to call the technician at the ISP; that might involve a call, and she might be without connectivity for 24 hours. That's mission critical in a user's home. I'm not even going to get started on robots in a factory; I'm pretty sure you can imagine that one yourselves. But for the common user it's also mission critical: if your thermostat fails, that's bad, you freeze; if your fridge fails, that's bad, your food goes off. So embedded is mission critical, embedded requires extreme resilience, and usually the resources are extremely constrained. So this is basically what we're saying: we can agree that the minimum specifications for an embedded Linux device look something like this in the current day and age, January 2022. This evolves; maybe last year I would have said 16 megabytes of NOR flash, and some people are still doing that, but for example we now see a bunch of routers going to 128 megabytes of NAND storage, and some of them even have eMMC. Still, most embedded devices are on the lower end of this range, and a lot of them don't even have a flash translation layer. Our usual architectures we know: ARM, MIPS, RISC-V. And when I say virtually any Linux distribution, in reality it means whatever works, whatever you need to actually build your system. You want to build something on top of Yocto or OpenEmbedded, go for it. You want to use a Linaro reference build for some board, go for it. Or, say you want to make a router, most likely you're going to go the OpenWrt way.
So now, back to the topic: why do we think containers are important for embedded, given this type of resources? Well, if we set aside the question of whether the cloud tools fit, we're left with the benefits: modern software development and firmware lifecycle management at speed. Modernizing the way we manage software on an embedded device is important. We have been doing it the same way for the past 20, 25 years or more; maybe it's time to change. We're starting to see the same problems the cloud was seeing six or seven years ago, when servers and applications were just getting too huge and too complex to manage. So what do containers help us do here, in an embedded world that is starting to see that problem? In the diagram on the left you can see what I mean: the embedded firmware becomes a huge monolithic image that incorporates all the features. The bigger and more complex it gets, the longer your release cycles are, and that prevents valuable and fast innovation. A monolithic release is time-consuming to produce and error-prone, and the two points above mean that some devices end up very stale out in the field: they're not updated, and they can be very insecure and vulnerable to attack. This creates incredible attack surfaces; we have seen it many times. An ISP in Germany a couple of years ago ended up with over a million routers compromised, creating a botnet, and fixing it required, I think, a technician visit or sending the box back or something like that. Imagine the cost of that. But that's the catastrophic scenario; in the normal day-to-day business, it's the long release cycles that prevent innovation and prevent you from adding new features fast enough. So we need to start doing what the diagram on the left shows: releasing the pressure of that monolith into a feature-rich pipeline, so that it becomes modular applications sitting on top of a smaller core, let's call it. And to achieve this, the easiest way is of course with containers. Let's not try to reinvent the wheel; let's just make sure that our wheel fits our use case. However, and that's the main point here, everybody is telling you: use Docker, use OCI-runtime-based solutions. But why? Docker wasn't built for embedded devices, and most of the OCI-compliant runtimes out there weren't built with embedded in mind either. So we propose a different type of architecture. It doesn't mean having a huge OS, then a container runtime on top of that, and then container applications on top of that, but rather a shift in paradigm. We are true proponents of something we like to call the minimal container runtime. The legacy, let's say Docker-style architecture, where you have your hardware, a bloated host OS on top, then a Docker engine, and then your business logic on top of that, is just too much overhead for the majority of low-spec embedded devices out there. Embedded nowadays is about more features, but it is still about a well-defined feature set.
The operator, the device manufacturer, wants to extend the features, but they want to stay in control of those features. So do we even need a big host OS, as we know it, sitting below everything we just mentioned, in a true embedded use case? Is that necessary, or do we just need to modularize the actual software functions sitting on the device? That's what we propose: doing away with the concept of a host OS. Why should there be a host OS? Instead we have a very tiny container runtime that sets up one to many userlands running on top of it, via the containerization tooling and technologies of the kernel. In the diagram on the right you can see what I mean. There's a box that says host OS, but in reality that is just another application: just another userland that might start one or many services inside a container and provide some functionality, for example networking, whereas another one of these applications might provide power control that is safe for, I don't know, a fridge or something like that. Now, there's a little note here that says "hint: init user namespace". I have always said that this is an evolution, or rather an extension, of what the kernel does by default. In the model on the left, the host OS is also running in a container for all intents and purposes; if you extrapolate a bit from the kernel's concepts, your main OS always runs in something called the init user namespace. Surprise: the kernel is actually running containers for everything. So what we're doing here is extending that concept and saying: why don't we use that init user namespace, that main namespace, just to provide a very minimal container runtime, and then run all of the actual relevant software functions on top of it? This is interesting, because it lets us manage the entire lifecycle of every software component on the device in a very straightforward manner. It lets us decouple the lifecycle of all the higher-level software running on the device from the firmware running below it, through a common protocol. Now, if we have a minimal container runtime, we have to decide what to write it in, and this is where it gets interesting; this is where the conversation starts about whether it's what the cloud vendors are telling us, or something else. We think it's C. A minimal container runtime: why C? Well, first of all, most of those cloud IoT proponents out there, with their whole 3.7 seconds of experience with embedded, have been employing a lot of web developers to solve the challenges of embedded, and these web developers have been telling us how embedded should be done. That's a bit of a brute-force attempt, let's say. In reality, it is not uncommon for these people to tell you to manage your embedded firmware with a Node.js or Golang-written engine on your 32 megabytes of NOR flash, or maybe 128 megabytes of NAND. Come on. We think a minimal container runtime should be considered baseline infrastructure; its role is much closer to that of the kernel than to that of userland applications. It should fit anywhere; it should be as portable as possible.
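Purely as an illustration of the kernel primitive being described here, and not as Pantavisor's actual code, a minimal C sketch follows. It uses clone() with namespace flags to start a child process as PID 1 of its own user, mount and PID namespaces, which is the kind of building block a tiny C container runtime grows from. UID mapping, rootfs setup and error paths are deliberately left out for brevity.

```c
/* Minimal sketch (illustrative only, not Pantavisor code):
 * start a child as PID 1 of new user, mount and PID namespaces. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)
static char child_stack[STACK_SIZE];

static int child_main(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace this process sees itself as PID 1. */
    printf("child: pid=%d (PID 1 of its own namespace)\n", (int)getpid());
    execlp("/bin/sh", "sh", (char *)NULL);   /* hand over to the "payload" */
    perror("execlp");
    return 1;
}

int main(void)
{
    int flags = CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWPID | SIGCHLD;
    /* Stack grows down on most architectures, so pass the top of the buffer. */
    pid_t pid = clone(child_main, child_stack + STACK_SIZE, flags, NULL);
    if (pid == -1) {
        perror("clone");
        exit(EXIT_FAILURE);
    }
    printf("parent: started containerized child %d\n", (int)pid);
    waitpid(pid, NULL, 0);
    return 0;
}
```

A real runtime would of course add UID/GID mappings, a dedicated root filesystem, cgroups and so on, but the point is that the namespace primitive itself is a plain C system call away, with no heavyweight engine required.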
So why change what isn't broken? Let's write it in C after all. LXC is pure C, and it was arguably the first true container engine. So what have we done? This is where our story plugs in. At Pantacor we have built something called Pantavisor, an embedded, lightweight container manager, built with the needs of embedded at the center; that's all we care about here. We're talking about a container engine plus a minimal initial RAM disk with an overhead of no more than one megabyte, say, with strict build rules that let you configure and change those things according to the requirements of your device. It is especially suited to small NAND or NOR flash devices, but of course it can scale all the way up to big things like a Raspberry Pi. It is highly portable, built in C, using a pluggable runtime architecture, and the main runtime we use is actually LXC containers to run each of the individual payloads. Remember, Pantavisor is in charge of the lifecycle management of all of those containers; we're not reinventing LXC, we're just providing the right scaffolding around it to manage the lifecycle of your embedded devices. Pantavisor is a single-purpose system. It doesn't try to be a full userland or a full OS; it doesn't let you configure your networking or figure out the drivers for your screens or whatever. Its only purpose is to orchestrate the lifecycle of your device's userland, which would be that containerized main OS, along with other application containers, but also your BSP components: your kernel, your firmware, your modules. By providing a single way of defining your entire system, you can then modularly manage each component of that system. And it is fast: these are just containers. It's a one-megabyte C program that runs before anything else, there is no real performance hit, and you get all the benefits of modern lifecycle management. So what does Pantavisor let you do? It lets you turn your monolithic embedded system into a set of portable and reusable microservices: reuse code across different projects, and make it so that the dark boxes at the bottom, the BSP and the hardware, can be interchanged while the things on top can also be interchanged. Pantavisor is the part that then knows what we are running on and how to bring certain applications to run on top of this BSP. This is very powerful, because it lets your product teams, your developers, everyone, decouple the lifecycle of the business-logic units written for these embedded devices from the lifecycle of the firmware components: your BSP, your kernel, your modules, your actual static firmware. All of this is achieved with something we call the Pantavisor system format, a way of defining and describing, in a single file, every component needed to make a device operate, from the kernel up. In this case we describe the components that make up the whole board support package: your kernel, and your Pantavisor binary itself, which is your initrd.
The file also holds configuration for those components, as well as every container running on top of them. In the very basic use case you only have one container, running one level up from the init user namespace, which becomes your main userland or main OS container. In parallel to that you can add other applications, or just a set of functions: a container that is your networking function, an application that takes care of screens, and an application that takes care of power. I understand that is all relatively high level for a single slide, but it is the type of world we're trying to define. All of this is a single project called Pantavisor. It is written in C, it is extremely lean, and it has truly been made by embedded engineers, for embedded engineers. So I invite you to take a look at what we're doing with this project. You can go to pantavisor.io to learn more about it and try it on different devices. If you have any questions, want to talk to the team, have suggestions, want to change anything, or want to tell us we're completely wrong in our approach, or not, you can go to community.pantavisor.io and share your thoughts. You'll find that, just like you, we're embedded developers by trade, and we would like to talk about this with everyone in this community. So thank you for taking the time to come to this talk; I hope some of you will stay afterwards for a bit of Q&A or just a friendly chat. Thank you very much. Let's wrap a couple of the questions up into one for you. We had a question about your target memory, which you kind of addressed in the slides after the question was asked. Can you talk about target memory, and why C is the solution versus Rust or TinyGo, which were also suggested in the comments? Yes, absolutely. Let me mute the other tab first. Okay, there we go. First, thanks to everyone who joined the talk, and thank you for your engagement. I think there are two questions in that one: one is the target size and resource constraints, and the other is why C for that type of target. First and foremost, when we started this project, the main driver was to support containerization on Wi-Fi routers, set-top boxes and that type of hardware. Five years ago, when we started, it was very common to find routers in the 16 to 32 megabytes of NOR flash range: no flash translation layer, nothing, very static monolithic firmware. A lot of us here are used to that. But the vendors we were working with wanted to add horizontal feature extensibility, and they kept getting pitched, and this goes back to my second slide, by big cloud companies and higher-end IoT vendors: use these high-level solutions that just don't fit. And the conversation was always: how do I get through to you that 32 megabytes is too small? The answer would be "it's fine, you can figure it out", because they were salespeople. So the original target was 16 to 32 megabytes of NOR flash. In reality, right now we're finding that the market, at least for Wi-Fi and set-top boxes, is moving to 64 or 128 megabytes of NAND flash, which is still on the low end.
Because in reality the base firmware, the base OS that might be running in there, might take 20 or 30 megabytes, which leaves 70 or 80 megabytes to actually deploy dynamic services. So those are the baseline target sizes. From a memory point of view, anything over 64 megabytes of RAM works fine, and Pantavisor fits in about one megabyte of storage footprint. That's the overall target. Now, why C. This is a question that Sigmund asked a few times, and Sigmund, you're right: this could be written in Rust, it could be written in TinyGo. The choice of C was more because of the market dynamics of the deeply embedded ecosystem. Remember, people sometimes use embedded as a big umbrella term, but Raspberry Pi-class devices and up, that's not embedded, that's very big, at least in my mind, in my world. I'm talking about embedded as in things with no eMMC, no flash translation layer, a single core, 64 megabytes of NAND flash, no more. The market dynamics at that lower end are a lot more complex: the engineers, developers and architects there are simply used to working with C, and that's a reality; all of their systems are built with C. That's the basis of the decision. Can there be Rust versions of this? Absolutely. Another question we got was from Thelman: can you give an idea of boot and startup times for your system as a whole, up to app startup? Yes. We use several different things to reduce whatever impact we might have on the boot time. There is a minimal impact, in the order of maybe 15 to 20 percent over what your normal boot time would be. The exact number of seconds depends on the board, on the system, on what we're trying to bring up. But in the baseline case, with just your normal userland containerized as the host OS container and started through Pantavisor, the time hit might be in the 15 percent range, and it can be optimized further with certain things we can do. Another question: how is administration done, what interfaces do you use to deploy new services? Pantavisor itself exposes a local API over Unix domain sockets that any container within the running system can leverage to control the lifecycle of all the containers and applications running on that system. So in reality anybody can write an agent that drives those APIs, or they can consume the systems we provide for this at commercial grade, which basically means Pantacor Hub, the management cloud, either as is or as a starting point; or they can develop their own. A lot of our customers actually develop their own, because they have existing management backplanes. Maybe I answered this already, I'm not sure: how would this compare to systemd-nspawn? systemd-nspawn does not provide the bigger scaffolding for lifecycle management. It does a very similar thing in the sense that it starts up a container, you could say; it gives you the primitives for a container. But the lifecycle management is what is really important for this type of unattended, mission-critical device.
So what we have combined is the minimal container runtime and the lifecycle management engine; both of those things come together to provide an entire product solution. That's what we try to bring to the market here. Ruben asked: is there a method to describe which combinations of containers are compatible, i.e. tested? He's concerned about risk. Well, this is a story older than the Bible, let's say: how do you figure out container cross-dependencies? The answer is that there isn't a straightforward way. We provide all of the metadata needed to understand which container is running and in what version. What we have seen is that the CI/CD pipelines and QA processes of our customers then incorporate that into their decision-making engines: what is deployed in the stable, release-candidate and canary testing channels, and so on. As long as enough metadata is available, you can keep a very comprehensive view of your deployment and of the software coming from your pipeline. Which is what happens in the cloud, kind of: the cloud is not about "this container depends on that other one"; it's more about the DevOps practices behind it. We have about 30 seconds left. I think we'll continue the conversation in the room if people want to keep asking questions. Thanks for all the great questions. Have you looked at the work done with containers on EZOS and C? I hadn't looked at that; I just saw it in the question and was trying to search for it, to be fair. I'll look into it right now. Okay, fair enough.
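Going back to the Unix-domain-socket control API mentioned in the Q&A: as a rough sketch of what a client of such a local lifecycle API could look like in C, the snippet below connects to a socket and sends a request. Both the socket path and the request string are hypothetical placeholders chosen for illustration; they are not Pantavisor's real endpoint or protocol.

```c
/* Hedged sketch of a local control-API client over a Unix domain socket.
 * The socket path and request below are HYPOTHETICAL placeholders,
 * not Pantavisor's actual endpoint or wire protocol. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    const char *sock_path = "/run/container-mgr.sock";          /* hypothetical */
    const char *request   = "GET /containers HTTP/1.0\r\n\r\n"; /* hypothetical */

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* Send the request and dump whatever the lifecycle manager answers. */
    if (write(fd, request, strlen(request)) < 0)
        perror("write");

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}
```

The design point being illustrated is simply that, with a plain Unix socket, any container on the device can act as a management agent without pulling in a heavyweight client library.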
Container technology has always been part of the cloud domain, and as such, its roadmap has usually been dictated by the use cases and requirements of that world. In the server domain, resource utilization is nowhere near as relevant as it is in the embedded domain. The languages and technologies that power the tools and mechanisms through which containers are leveraged in the bare metal server and cloud worlds just don't fit the requirements of embedded. Despite the above, these past couple of years have seen an aggressive push from cloud-centric companies trying to tell the embedded Linux ecosystem and its development community that we should make do with Golang, NodeJS and similar solutions and tools. Most are unaware of the challenges of cramming cloud tools into a resource-constrained embedded system. Even though the architecture of some of these frameworks has the right intention (LXD), most just lack the interest in understanding the specific requirements of embedded. In this talk, we'll explore how using containers for embedded systems modelling can help facilitate development cycles by enabling modular software architectures. We'll deep dive into what the real requirements of embedded systems are and how modern container technology can help us meet the actual needs of this world. And lastly, we'll walk through an example with Pantavisor, an open source container framework implemented for embedded systems.
10.5446/56850 (DOI)
Hello everyone, my name is Mohammad Sarin, and I am here to present open geodata digital spaces, taking OpenStreetMap as the case study. I belong to a startup called the Open GIScience Research Lab in the Netherlands, where I am the lead researcher. I am presenting today a small part of a larger project called OSM Utopia, in which I try to explore how the world map should look from a more ethical point of view, relying not on a generalized, finalized model but on context, with a qualitative and mixed-design approach. In this presentation I would like to show why the open geodata digital space is an important concept and how we can use it to understand different aspects, social and non-social, of OpenStreetMap and of geodata in general. Let's first go through what a digital space is; before that, we have to break down space and digital separately. Space is where physical activities and experiences take place, and where we can see objects with our physical eyes; that is how I use the term in my research project. Digital is anything from that space that is then put on a digital screen, for analysis or for any other reason. And a digital space is where the activity that causes the creation of the digital, specifically of maps, is still happening. This is why it is important to define, or rather first to understand, digital space in geodata, and even more so in open geodata: I want to understand it rather than define it, because defining it straight away would be a really generalized approach. So I argue that it should be understood before it can even be defined. Whether a digital space in open geodata exists, and how it exists, I will explain on the next slide. In summary, there is the analog. Analog means what is on the non-digital side, like buildings. If I consider geodata, the analog would be buildings, roads or anything in particular that can be converted into data, into digital data. There are definitions of analog data and digital data that I will not go into in depth, although what exactly counts as digital data and what counts as analog data is itself a debate we should consider. According to this understanding, there is something analog, for example buildings, which is converted into the digital, and data is produced. Sometimes the digital is produced first, for example through media attention and other digital forms of potential gain and opportunity building, and then people gather to create data. For example, a migration crisis is pushed into the digital and then back into the analog to create data. That is not to say the migration crisis is caused by the digital; rather, some of it is amplified by digital or social media and then sent back to the analog to create data, in the form of activity by mappers, or of machine learning algorithms built from that manual work, and this is then converted into digital data again. So this is what I mean by analog and digital, and it is often a loop. From this I understand that different data is created, and especially when I say open data, it means that different stakeholders and different steps create the data. The different steps that encourage data to be produced are a form of the analog; they are a form of space, and they happen in space.
For that reason we need to understand data production: there is no one way data is produced, no general way. We have to understand how data is produced in different regions and on different platforms, because it leads us to understand ethical boundaries better. If we generalize certain definitions of data production or ethical boundaries, they will only fit the global north, and the global south will suffer; research has shown that as well. And finally, with different data there will be different data quality measures, and we have to understand what data quality means for different regions and for different platforms. As my instance, I am taking OpenStreetMap. Briefly, what is OpenStreetMap? OpenStreetMap is a map of the whole world that is freely available and open to anyone who wants to contribute to it. What you see are lines, polygons, multi-lines and points, called features, and these are considered something digital, created through machine learning or artificial intelligence algorithms, or by manual mappers who click the data into OpenStreetMap. But from our understanding, these lines, points and objects do not only represent polygons and lines; they represent something more. They carry emotional value and a history of how they were produced, and of whether the data was good when it was produced; of course, when we look at the quality measures from the global north, they suggest it is good data. From my perspective these points and polygons have a certain value: they were created when some activity happened or when certain attention was paid, and they are not just created out of nowhere. So we need to understand when these activities take place. Why am I asking this question? Because knowing the motivations of the mappers, and what happened in the space that caused something to change, is what lets us really understand what happens in data production and which ethical boundaries should be taken care of while producing that data. There are various ways of explaining that activity: there is an activity approach, there is a motivation approach, and there is also a potential gain approach; finally we will look at the users. The activity approach asks when the data is produced, when it is added. Data is produced when such an activity takes place: for example a disaster, an event, a crisis, or opportunity building, such as the commercial opportunity of Uber and Facebook using open geodata to build their products for their users. And media attention: there was huge media attention around the refugee crisis, huge commercial opportunity building came into being, and from the digital it becomes analog, and then it can become a lot of data. This list is still limited; maybe there are more activities than these. Second is the motivation approach. Usually these approaches are interconnected, but they do not always have to be; these are just categories.
So when a disaster happens, there is disaster mapping that motivates mappers: altruistic mappers want to help, and they map the entire area. There are also mappers who map their own area; I am not going into detail about this, it is not the part I want to discuss here. Then there are career mappers who want to improve their careers and learn skills, and commercial actors who add data for their own commercial uses, for their users, for example Uber and Facebook. Sometimes this can also be considered a potential gain. Why am I raising this? Because motivation and potential gain need to be separated somehow: sometimes the potential gain is one thing and the motivation is something else. Take the case of Cox's Bazar that I presented in my previous presentation, which I would encourage you to see (I will share the link as well): in Cox's Bazar the data was produced by humanitarian crisis mappers, let's say, whose higher motivation was humanitarian, while commercial actors used it for potential gain, presenting the data to decision makers in Bangladesh and using it for repressive control of refugees. So motivation and potential gain should be differentiated in certain cases, and they should not be generalized in the first place. And this varies with the usage, because potential gains come from the usage of the data in the digital world, by creating these spaces. These potential gains are mainly monetary; the value attached to them is whether they can make money or not. If we remove the money, it becomes a motivation for a cause. There are always incentives, and this is really complex: potential gains and motivations are complex and far from a homogeneous, pure picture. There are various ways of looking at the difference between them, but it all comes down to the usage. You have to understand quality, and you have to understand how the data is used. So, as a provocation: quality is not always what the end user requires. As I mentioned in the previous case study, in some cases your end users need repressive control over refugees, for instance. Whether refugee data should be shared or not is a huge topic in data science right now. So this really is a provocation: my contention is that quality is not always what the end user requires, even though it has continuously been treated that way, and this provocation should be taken into account. Quality, however we define it, should be according to the usage, ethical boundaries should be maintained for that usage, and it should not do any harm. So this is how our digital space looks so far. There is an activity: a certain activity happened, a disaster happened; then there is a motivation or a potential gain; different actors take charge and produce data; and then there is the usage: risk analysis, getting from place to place, routing, everything. The whole thing ends up in the usage. Our digital space looks like this, and it always changes and in some cases overlaps. It is not a model; it is just a way to develop a certain understanding. But one limitation of this approach, which comes from feminist research, is that the world looks different depending on where you start from.
If we start from the activity and move towards the usage, the understanding we develop about the digital space of a certain case study will be different from the one we get if we start from the usage and move towards the motivation and the activity. This needs to be understood as well: I am not designing a model; this is not a model approach, not model thinking. It is a way to understand what is happening in data production, and it varies across different cases, different case studies and different platforms. With this framing we will get different understandings and understand things quite differently; that is actually one of the hypotheses I am using, and I am not trying to limit the approach to one direction only, but to move in both directions and see what understanding we can get from each. The limitation, of course, is that the understanding differs between regions and approaches, so if we get one understanding we cannot apply it to anything else. It is really qualitative, to some extent mixed design, and transdisciplinary, because various disciplines are involved: stakeholder analysis and stakeholder management, GI science, social science, and philosophy. The idea that an object represents something comes from object-oriented ontologies, and there is the question of whether only living things can add to the digital or whether non-living things can as well; both can add to the digital. So it varies with different regions and different approaches. Finally, what do we get out of this? You will understand how data is added in multiple ways. There is no one way to add data, of course; there are multiple activities that happen, and we can see how they differ from each other and why we should not generalize data production and data quality definitions. This is how we should understand it instead of just defining it. Defining it is a quick way for practitioners, which is fine, but there is more to data production than just defining and analyzing it, because generalization can also cause harm. When we used disaster mappers to add data for a refugee crisis, their motivations were one thing while the potential gains obtained by the commercial actors were another, and it caused harm because we had no such understanding; we were only taking into account what the end user needed, and in the end that reduces data justice. For instance, in the Haiti case, data is only added when there is a disaster; before the disaster, nothing was added. So we need to understand what media attention and other attention caused certain areas to be mapped while, at other times, they are not. Let's call this data injustice. Thank you very much for attending the presentation. Please reach out: one of the things I hope to gain from this presentation is participants to answer my questionnaires, to understand digital data and digital spaces better, and if possible I would also like to invite you for an interview. Please reach out if you are interested: send me a message on Twitter or LinkedIn or anywhere. I would be happy to talk, and it would help me understand different points of view on digital spaces and on OpenStreetMap in general. Thank you very much. Thank you. Now, please tell us more about yourself, about your background, and how you are connected to OpenStreetMap.
Okay, thank you. About my background: I have a master's degree in spatial engineering from a faculty of geo-information science. My background is really mixed, with civil engineering, technical science and GIS, but what I like most is to work in a multidisciplinary and transdisciplinary way: to combine technical GIS science with some social science, to see how the two connect and what we can gain from it. Usually I have a job from 9 to 5 that I do for money, and from 5 to 9, and on the weekends, I do this out of passion, for my startup. It is kind of a nerdy hobby for me, but sometimes I discover a lot of interesting things. So that's me. Okay, I think your timetable is similar to that of many people joining now: they have their day job and then the hobby they work on. How does this understanding of digital space help open geo developers? Can you tell us more about that? For open geoscience developers it matters because their data is open and anyone can use it, and their code is open and anyone can use it. So it is important to understand how different users are going to use that data, and how developers can make sure that their code or their data is not causing anyone harm, any public harm, and to understand better how value is created for different people, because different people create that value in different ways. Sometimes developers are in the global north and we are developing something for the global south, and we need to understand that there is a gap in the understanding of the usage. This will help them understand that gap as well and make more ethical decisions. It is not a problem as such, it is just an understanding. Yeah, okay, that's right. And do you think this topic would be a good topic for an OpenStreetMap working group, or is there something like that already? The project is not very mature yet, but I would really like to present this to an OpenStreetMap working group. I usually attend those talks, and this is definitely a topic for them, and I would really like to work with them to reach a certain maturity. For this talk I would like opinions and feedback from the people who are watching now and who will be watching in the future: what do they think about OpenStreetMap, and how could OpenStreetMap, or open geodata standards in general, be more ethical? Okay, so maybe with the recording of your talk you can reach out to the rest of the OpenStreetMap community who are not here today and continue the discussion. Another question: how can people join the conversation and get in contact with you? I gave my contact details on the last slide, and I would really like people to come and talk to me about what they think about OpenStreetMap, how they think the data was created, and how they contribute to OpenStreetMap, because I really want people to join this project and possibly collaborate with me. I am really interested; they just need to reach out. I am a very open person and I have a routine that can match theirs, 5 to 9 or the weekends, so we can talk about how to collaborate. One thing I should say that is not attractive about joining my project or my foundation is that it is still non-profit, so there is little to no money.
So I do it out of passion, and what I really want to do is to create a better understanding of data ethics for GI science, because I think a lot of problems have been caused by generalizing data and data models. Sometimes that puts us in opposition to big corporations, which is why it is sometimes a bit tough to get funding. But even if you do not want to collaborate, for whatever reason, I would really like to talk to anyone who is interested, for half an hour, or just to have you fill in a questionnaire I have developed, which will be ready in about a week. Please reach out on Twitter or anywhere else. I can go back to the room and type my contact details, because I am really keen for people to contact me, sorry. Okay, that's good, so people can stay here: we have a hallway room where people can discuss with you in person after your talk, and on the submission page of your talk you also added information, so people can find all the contact details, I guess, and we can chat in the chat as well. Anything else we could discuss? We have some more time, three minutes to go. So go ahead. Okay, let's see. More questions from the audience? People can still ask questions, but there are none yet. Maybe a question for the audience then: if you're not going to ask me a question, I'm going to ask you. Could anyone write in the chat how they use OSM, or what OSM means to them? I would be really interested in that. You are free to continue the discussion in the chat in the geospatial devroom, that's no problem, and people can ask questions there. I think we have many millions of OpenStreetMap users all around the world, and I guess everyone has a different focus and maybe different interests in what to map and what to do with OpenStreetMap data, and maybe they do not have in mind that the data they provide in OpenStreetMap could perhaps harm other people. So it's good that you started this discussion. Yeah, this discussion is quite important before OpenStreetMap explodes with data, I think. It is already exploding with data, and first there is inequality in the production of data, and then there is a divide. The inequality is that some regions have almost no data while others have a lot; the divide is that people from highly developed countries start mapping countries in the global south with their own understanding. That is what motivates me to develop this understanding better. I think OpenStreetMap, on the other hand, is a way to bring the global community together, and it's great to be able to map in a region where you don't live, and the other way around, you could map and...
Open source geotechnologies are developed using various socio-technical systems. Understanding these systems can help us understand the genealogy of data generation. There are various applications that transcend the basic understanding of geospatial technologies as purely technical systems. I am analyzing OpenStreetMap and trying to define its digital spaces to understand what forms of plurality exist in data production. Conceptualizing the digital spaces of OpenStreetMap is important in order to visualise how we can define ethical boundaries and to answer the underlying question of “what is quality?”. I am currently interviewing OpenStreetMap users, sending out questionnaires, and conducting textual analysis of previous conference talks related to OpenStreetMap. In this talk I present my ongoing progress on the research project OSM Utopia, and the small part for which I would like to increase support from different researchers and practitioners: conceptualizing OpenStreetMap digital spaces. Furthermore, I explain how I plan to explore the interconnection and interdependence of the analogue and digital spatiality of OSM and the different research paradigms that need to be explored. I will also share the current progress on defining the digital spaces of OpenStreetMap and on how we can categorise the different analogue and digital assemblages that form these digital spaces. Limitations of these methods are also addressed. The end goal is to show that data quality for OpenStreetMap requires a certain level of rethinking.