10.5446/53564 (DOI)
Hi everyone, my name is Hisham and the title of this talk is "What's Next for Teal, the typed dialect of Lua". So let's get into it. I'm going to start by introducing myself. I'm Hisham and I have been involved with free and open source software for a long time. I created the htop process viewer, the GoboLinux distribution, LuaRocks, the package manager for the Lua programming language, and most recently Teal, the Teal language, which is the topic of today's talk. In fact, this talk is part three of a trilogy. The trilogy started with part one, called "Minimalism versus Types", which was my talk at this dev room at FOSDEM 2019. That talk was a look at the past, at all the previous attempts at bringing types to Lua, and at my own experiments around that topic at the time, around when I started to prototype what eventually became Teal. Part two had the long-winded title "Minimalistic typed Lua is here", because I didn't have a name for the language yet, so I was just calling it tl at that time. You can imagine: tl became Teal, which was a much nicer name. At that point, instead of looking back at the previous attempts, the prototype was already a compiler that was actually self-hosting, meaning that I was able to run it to compile itself. So I was pretty sure that the proof of concept had proven itself, and I was saying: hey, you can start playing with this, let's give it a try and see what happens. So that means that at this point, now that Teal is actually a reality in a sense, we can start looking at the future. This is going to be a more forward-looking talk about what's next for Teal, but of course I'm going to start by recapping and trying to tie everything together about what this journey of getting to a typed dialect of Lua was like. And I need to shout out, first and foremost, the fact that as opposed to last year's talk, now we do have an official name. But first, let's step back a little bit and talk about Lua, because I think this is going to help us tie everything together in the end, and I don't want to assume that everyone in the audience is very familiar with Lua. Lua is a paradoxical thing, because it is a widely used niche language. Whenever I talk about Lua, I get one of two reactions: people either go, "oh, of course, Lua, yes, sure", or they go, "I have never heard of it". So you might fall in one of the two camps, but do not despair, because I'm going to do a super quick recap. Why am I calling it a widely used niche language? Because the main characteristic of Lua is being a tiny, small language which can be applied in many different scenarios. It has been called the JavaScript of games, because it is so popular in all sorts of game engines, big and small, from huge 3D game environments to small mobile apps and game consoles; in the gaming industry it is pretty much everywhere. It is also super popular in embedded systems and the IoT world, it is used a lot for networking software, and it is used inside applications for developing a scripting layer, which often results in a good chunk of the entire application ending up written in Lua. You might have also seen Lua embedded in things like VLC and Wireshark.
There is also LuaTeX, which is a variant of TeX that is entirely Lua-scriptable. And Lua is the scripting language for Wikipedia as well: some of the templates that you see on Wikipedia are actually scripted in Lua, which is really cool. So why do people use Lua? Generally it's for one of three reasons. First, because it is embeddable, which means that it is built as a library which is super easy to embed into your application. Being an embeddable library, it is also very small, and the implementation is pretty fast; there's even an alternative implementation with a JIT compiler, LuaJIT, which is even faster. Second, it is extensible, meaning that the language itself is designed with hooks which make it easy to extend it and adapt it to whatever environment it is embedded into. So embedding and extending walk hand in hand. And the third reason a lot of people end up involved with Lua is simply that some application they care about already embeds Lua, so Lua is the way to have further control over that application, to mod it or to extend it in some way. One important thing about Lua is that it is minimalistic. It is quite small: the implementation is just a couple of hundred kilobytes. But it is not minimal, certainly not minimal in the sense of something like Forth. There is a certain pragmatism to Lua, and over time the language has grown. The language does grow, but slowly, and one thing I have observed over time is that features are often added in a logical pattern. For example, for a long time people asked for bitwise operators to be added, but bitwise operators only got into the language once 64-bit integers were added. Before that there was a single number type, which was a 64-bit double-precision float; once native integers were added, it made sense to add bitwise operators, instead of doing them in a clunky way over floats. Another main characteristic of Lua is one of its mottos: mechanisms, not policies. That motto has pros and cons. The main pro is that it makes Lua fit really well as an embedded language in an application, because you can use all these mechanisms to build whatever policies your application needs. But one of the cons is that it makes it somewhat non-ideal for pure Lua application programming, because you essentially have to rebuild the world from scratch to get your environment going, and also for ecosystem building, because people who are building libraries for application programming are often defining their own policies, which do not always play well with each other; having the language itself not define these policies, for example how to do object-oriented programming, makes that harder.
The end result is that in the Lua world you end up not with one Lua ecosystem, in the way you might think of the Python ecosystem or other languages like that, but with several, often somewhat incompatible, ecosystems: we could cite the OpenResty ecosystem for networking applications, the LÖVE engine for games, or Roblox, which is yet another game engine with its own world and its own libraries. For application programming in general, that situation is not ideal. So, enter Teal. Teal is a new programming language which defines itself as a statically typed dialect of Lua. It is, in a sense, an extension of Lua, but not exactly a matter of taking one specific Lua version and extending it, especially since some Lua versions have incompatibilities between each other. Teal tries to follow mostly the latest version of Lua, but the compiler itself is compatible with multiple versions of Lua, and it defines its own dialect with its own extensions for specifying types, adding type annotations, and so on. So why Teal? First, because static types are good: they are a great thing for programming in the large, for application development; maybe not super important for small-scale scripting, but once you start doing application programming, static types are good. I won't try to convince you of that here: if you're already sold on static types and you like Lua, you will enjoy Teal, and if not, there are tons of talks out there that will evangelize static types; I won't have time for that in this one. So you'll have to take my word for it, or just take it as a working hypothesis for the rest of this talk. Okay, static types are good, but there are tons of statically typed languages already, so why make another one? Well, I like Lua and I like working on compilers, so as a personal project this was interesting to me already. And the third reason I think Teal can be useful is that people, myself included as a developer of LuaRocks and of Kong, a project I have been involved with for a long time, do write quite large applications in Lua, and at that point it is great to have static types. Like Lua, Teal tries to be both minimalistic and pragmatic. Teal is pretty minimalistic in the sense that the compiler itself is a single Lua module, one tl.lua file, which is in fact generated, using Teal itself, from a tl.tl file. That one file implements a Teal-to-Lua compiler, and it can be loaded into any modern Lua version, from Lua 5.1 to Lua 5.4, including LuaJIT. It can be dropped into a project without any dependencies and it should just work: once you have it, you can require the module, load the package loader, and voilà, you can start requiring .tl files from there; the compiler will automatically compile them to Lua and load them, and your application has Teal support just by including a single file.
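As a minimal sketch of that embedding story, assuming the compiler module is published under the name "tl" and exposes a loader() function as in the teal-language/tl repository, a Lua host project might do something like this:

    -- a minimal sketch, assuming the module is named "tl" and provides loader():
    local tl = require("tl")
    tl.loader()                         -- installs a package loader hook for .tl files

    -- from here on, require() can pick up Teal sources directly;
    -- "mymodule" here is a hypothetical mymodule.tl somewhere on the package path:
    local mymodule = require("mymodule")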
As a language it is also quite pragmatic. For one, it is not aiming for type-theoretical soundness, and this is something I have already discussed in the previous talks of this trilogy: it doesn't try to be perfect in the mathematical sense of type theory, and in my opinion it doesn't need to be perfect, it just needs to be better than not having types at all. It's important that it does not get in the way, that it doesn't make programming less pleasant in any way or more error prone, that it helps you catch errors, and that it is generally just a better experience. So far I've been pretty happy with it. Working on the compiler, I started writing it in Lua and eventually switched it to Teal, and after the switch it was clear that this was a lot better than the previous stage of development, because now the compiler is actually helping me work faster. So yes, I think there is a benefit there. At the same time, in terms of language evolution, like Lua, Teal is not afraid to add features when they make sense, when they follow a logical pattern. The first premise of the language was adding static types; once we had static types, the logical thing was, for example, to have explicit global declarations, so we added a keyword and we got explicit globals. Another example: once we added union types, we needed a way to discriminate between the members of a union, so that led to adding the is operator, an addition to the language that Lua originally didn't have; it allows us to do compile-time comparisons and some flow typing. So we add things here and there when they make sense. We try to avoid gratuitous additions to the language, but when something moves the mission forward, the mission of helping programming in the large, I think it's worth it.
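A rough Teal sketch of those two additions, my own illustration rather than code from the talk:

    -- explicit global declaration: plain Lua would just assign, Teal asks for the keyword
    global VERSION = "0.1"

    -- a union type, discriminated with the `is` operator (flow typing):
    local function describe(value: string | number): string
       if value is string then
          return "a string of length " .. tostring(#value)   -- value is narrowed to string here
       else
          return "the number " .. tostring(value)             -- and to number here
       end
    end

    print(VERSION)
    print(describe("hello"))
    print(describe(42))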
So, where is Teal now? First, in 2020 we announced Teal 0.1; back then it wasn't even called Teal yet, it was just tl. In the year since, we have seen a small but growing community develop around the language. We started getting contributions in the compiler itself, but also around it, in the tooling department, which is super important: a language is not only its compiler, not only its reference implementation or definition; nowadays a language is nothing without tooling and proper things like editor support, which was one of the first contributions we had around it. So now we have proper editor support for Vim, for VS Code, and for other editors as well. We also have an online playground in which you can write Teal code and see it compiled to Lua straight from your browser. It uses Fengari, which is a Lua implementation in JavaScript that runs in the browser, and Teal is agnostic to whatever Lua environment it is running on, so Darren, who wrote the playground, was able to drop Teal on top of that and get it running in the browser. We also have teal-types, a repository with type definitions for Lua libraries, which has seen contributions from the community. This repo is similar to typeshed for Python or DefinitelyTyped for TypeScript: for Lua libraries that are not written in Teal, you have definition files which specify the types that are used, so you can require those Lua libraries from Teal code and they appear typed, thanks to the definitions. And of course, in the year since Teal 0.1 we have seen a bunch of language improvements as well. This has personally been my main area of focus, but I have not done it alone. Union types got in early on; tuple types were added as a community contribution, so we can say that a developer team is growing around the language, it's not a one-person project anymore. We have also added metamethods to the language, and support for some of the more recent Lua 5.3 and 5.4 features, like integer division and the bitwise operators, and they work for all versions of Lua: even if you are running on LuaJIT or Lua 5.1 or 5.2, you get those operators, because we generate the proper code for whatever Lua version you are targeting. We have improved the flow-based inference, and this is an area with constant incremental improvements; the compiler is still quite simple in that regard, but for a lot of common patterns the inference already lets you avoid annotating some types or doing casts. And since the language is still in the 0.x era, we are taking the opportunity, also based on user feedback, to make some syntax tweaks and cleanups here and there and just make the language better. So, finally getting to the title of the talk, and this is the most delicate part, because I don't want it to sound like vaporware, but I will share some of my thoughts on how I see the language moving forward. What's next for Teal? Well, I feel it's really just time to start using it. The language is already being used by people, and we are using this experience to mature it. I myself want to start using it more in things other than the compiler; every time I write Teal code that is not in the compiler, I get better insights about it, because it's important, when you develop a language, especially a self-hosted one, to avoid the pitfall of writing a language that's only great for writing its own compiler. It needs to be used in more domains. This is what we've seen from the feedback we're getting: little by little, we are identifying what's missing. At the same time, if something is missing, it doesn't mean we're going to add it for whatever reason; we're always going to try to strike that balance between minimalism and pragmatism. It's still early days in Teal's life, but people are already using Teal in production: we were pleasantly surprised when a user showed up in our Gitter chat and mentioned that the product they're working on in production has more lines of Teal code than the compiler itself, so there are people out there writing more Teal code than me already. Last month we had our first community meetup and got some good feedback about features people are missing, and on this slide I've put some of the things that sound to me like good candidates for a near-future roadmap. People have been talking about adding an explicit integer type.
That is especially useful for people who are using Lua 5.4, but even if you're running older versions it would still be nice to have an integer type; it just would not have the same precision guarantees, which is not uncommon in languages that support multiple targets. Another thing that was mentioned was abstract interfaces for records. And since we have static types, people are longing for optional or required function arity, so that the compiler would tell them if they forgot an argument in a function call. That leads to a more open question regarding nil safety, which is harder to pull off, and if we go down the TypeScript path it would lead to even less soundness in the language, so there is a question of whether we really want to go in that direction. I think it's worth experimenting and seeing where we get, but that one I put in with more of a question mark. There's going to be a lot more discussion about it this year, and if we as a community can find a solution for it, that would be great, because nil is infamously the billion-dollar mistake, in every language that has it. Since this is a talk meant to be forward-looking, I'd like to share a thought that is a bit more philosophical. As Teal evolves, I believe that in order to keep true to the minimalistic and pragmatic spirit of Lua, it needs to take a page from Lua in one crucial aspect. Like Lua, Teal is embeddable: you can embed it into any Lua environment; once Lua is already in there, it's trivial to load one more Lua file and Teal is there. But it also needs to be extensible, and the reason I think that is that this is the only way to avoid the language growing indefinitely, because type systems evolve and people's needs change: today's somewhat esoteric feature may be tomorrow's mainstream feature. If Teal had been created a while ago, it might not have had generics, and who knows what people are going to expect as a baseline a while from now. And for people with specific needs, which is one of the main interesting things about Lua: if you have a very specific need, you can just extend the language for it. So here is how I see a path for extensibility in Teal. When we consider extensibility in Lua, we see that Lua has metatables and metamethods, which are evaluated at run time, and another source of extensibility is its C API. Those are advanced features that are generally handled by the embedders, the people who integrate Lua into an environment or an application, and not by the scripters, the developers who write the bulk of the business logic in Lua. Teal needs support for extensibility as well, in some sense, and a good parallel for that would be metaprogramming at compile time. For that it would need something like a compiler API, so that the metaprogramming you do could introduce new types and have its own evaluation rules for type relations, those sorts of things. Compile-time metaprogramming is a topic I've been interested in for a long time; my undergrad dissertation, a long time ago, was actually on compile-time metaprogramming, so it will be fun to get back into that topic at some point. And extensibility and embedding walk hand in hand.
These meta-features of Lua are often used to make Lua fit into an application or an ecosystem, so Teal's meta-features could likewise be a solution for easing its integration into existing Lua environments. So, in short: Teal is usable today, and I invite everyone to give it a try. It provides embeddable static typing support for Lua-based environments, and it provides what I consider to be important functionality for programming in the large, namely type support. It is a new language, it is evolving, it has a welcoming community, and it's a great opportunity to contribute and to get involved with compilers and programming languages and all that sort of fun stuff. So thank you, and once again I extend the invitation: if you got interested, drop by our Gitter, which also has a Matrix bridge, so you can join via either Gitter or Matrix and say hello; we hang out there, and there are links to everything in the Teal repo. That's what I had for the talk, and we're going to switch to the Q&A session now. Thank you. Hello, so it seems like the Q&A is actually live and working now. Oh cool, it's working, yeah, looks like it; let me just mute that, there we go. So I saw you answered a lot of these questions in the chat as well, but let's go through them here live too. The most highly voted question was: does Teal still let you use dynamic typing, so can you mix it with the static types that you have added in Teal? Yes, and since we're live on the stream now I can expand a little bit on that answer. Teal has a type called any, which accepts anything; it's basically a dynamic type: everything is an any, and an any is accepted everywhere. It also supports typecasts, so you can do explicit typecasting when you jump out of the static type world into the world of any, and back. For example, if you look at the source code of the compiler, there's a little utility function that every Lua programmer has written at some point in their life, which makes a copy of a table, and that one is generic in the sense that it just takes keys of any type and values of any type, puts them in a new table, and when you get the copy back out, you get your type back. So it's also possible to keep those worlds somewhat separated: you can take your nicely typed function arguments, cast them over to any, do dynamic stuff, cast them back and return, kind of like people in Rust go from unsafe back to safety. So yes, you can do that.
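A rough sketch of the kind of thing described in that answer, my own illustration rather than the compiler's actual code: a generic table copy, plus stepping out into any and back with an explicit cast.

    local function copy<K, V>(t: {K:V}): {K:V}
       local out: {K:V} = {}
       for k, v in pairs(t) do
          out[k] = v
       end
       return out
    end

    local scores: {string:number} = { ["alice"] = 10, ["bob"] = 7 }
    local copied = copy(scores)                 -- still typed as {string:number}

    local dynamic: any = scores                 -- anything can flow into an any...
    local back = dynamic as {string:number}     -- ...and an explicit cast brings it back
    print(copied["alice"], back["bob"])         --> 10   7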
Cool. And this is a question just for myself, but does this also mean that you can use existing libraries that haven't been given the types treatment? Yes, and there is also another way to use them in a typed fashion, which is heavily borrowed from TypeScript. The idea is that you can have your Lua library, which might be written in Lua or even in C, because Lua has that C API. For example, if you have a library like lpeg.so, a Lua library written in C, then you can have a file called lpeg.d.tl, which is a definition file. At compile time the compiler will pick up the .d.tl file just to read the function signatures with all the types; the implementation is not there, so it's kind of like a C header file in that sense. Then at runtime the Lua VM will pick up the .so file and actually run the C code. It's up to you to make sure the files match; the compiler will not verify that the definitions actually match the C code, but for your Teal code it will rely on the types that are written there. So that's the way to integrate with both. So, yeah.
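For illustration, a declaration file for a hypothetical Lua library, with invented names and a far smaller surface than a real lpeg.d.tl, could look roughly like this:

    -- greeter.d.tl: types only, no implementation; at runtime the Lua VM loads
    -- the real greeter.lua (or greeter.so) found on the package path.
    local record greeter
       version: string
       greet: function(string): string
       shout: function(string, number): string
    end
    return greeter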
This talk is the third part in a trilogy of talks hosted at this devroom that chronicles the birth of Teal, a new programming language that is a typed dialect of Lua. In this talk I will present an update on Teal: we'll talk about the current status of the language and its nascent community, and look forward at what lies ahead for its future. We will discuss a bit about the recent evolution of the project, and where it can go from here, adding more power to the type checking while keeping the language simple. In Part 1, "Minimalism versus types", presented at FOSDEM 2019, we talked about the previous projects that aimed at producing typed variants of Lua, the challenges they faced, and the idea of trying again with a focus on minimalism and pragmatism. In Part 2, "Minimalistic typed Lua is here", presented at FOSDEM 2020, I presented the progress of the project, with a compiler that is able to type check itself and compile itself into Lua code. At that point, the language was still called tl. Now in Part 3, the language has a name (Teal!), and a growing community. In "What's next for Teal", I will report on the advances we've made in the last year, and also address the elephant in the room: people keep asking for features, the language keeps growing, the type system has already made it more complicated than Lua, so what about the minimalism? For that, we need to go full circle, revisit what minimalism means in the context of types, and how to approach it in a Lua-like way.
10.5446/53565 (DOI)
Hello all, thanks for coming to my talk. My name is Andy Wingo and this talk is about compiling to WebAssembly. WebAssembly is a target architecture for compilers. In this talk we're going to have a little hands-on intro to WebAssembly, and then we're going to proceed to write a tiny Scheme compiler. We're going to talk about the bits that we can't get to in the live-coding part, and we're going to mention the different kinds of questions that you will need to answer if you're considering targeting WebAssembly from your language. If you'd like to follow along, there is a GitHub repository with all the supporting information that you might need. So, to be concrete, let's take this simple Scheme program, the recursive factorial. We would like to try to compile this to WebAssembly. When I work on a compiler, I like to think about what kind of code I would like the source term to residualize to; in this case, I need to think about what the WebAssembly should look like for the result of compiling the recursive factorial. Happily, WebAssembly is defined in such a way that it has a standardized text format that corresponds exactly to the binary. Whereas the binary is Wasm, the text format is called Wat. It's written using S-expressions, which is kind of nice, and a module is a bunch of sections in the file. We don't need to touch on all of them for this example; we're just going to look at the type, function, and export sections. So if we switch over to the editor here, I start my module. First, I have to think about the types. What's the type of factorial? I'll give it a name, $tfac. These things with the dollar signs in front of them are just names for the purposes of the text format; they're not reified in the binary, they're just nice names. The factorial function takes one parameter, a number. Well, WebAssembly only has four data types: i32, i64, f32 and f64, and that's it, so we have to figure out how to express our program using this small set of primitives. In this case, we're going to have one parameter, which is an i32, and one result, which is an i32. You can have as many results as you like and as many parameters as you like; web browsers commonly put some restrictions on these, you can only have up to a thousand parameters or a hundred thousand or something, but in practice it's not really a limitation. Okay. So when I go to write the function for factorial, again I give it a name; it has type $tfac, meaning it has one parameter and one result. The incoming parameter will be mapped to local variable number zero. If I had more locals, I could declare them here, like i32 i32; that would mean a total of three locals, including the parameter. But in this case, we don't actually need any extra local variables. So if I look at the source program, I need to start by doing this if. And WebAssembly is a strange machine: it doesn't have goto. It does have a nested if-then-else, but that's not what we'll use; the general control primitives it has are blocks and conditional breaks. A block is essentially a region of code which is also a jump target: you can jump to a block. There are essentially two kinds of blocks, not counting the if-then-else: there's a block and a loop. If you jump to a block, you end up jumping to the end of the block; if you jump to a loop, and it has to be a loop that is in scope at the jump, then you jump to the beginning of the loop. And that's all you have.
So for my if, I'm going to make one block, a block called $b1. Blocks have a type: a number of incoming values on the stack and a number of outgoing values. In this case, this block has no incoming values, because there are no values on the stack at the start of the function, and I'll just let its one outgoing value flow out to be the return value of the function. I can give a name to that type, $tb1; a block type is essentially a function type, so it has no parameters and one result. There are shorthands for block types that have only one or zero results, but I'll just write them out in full here, it doesn't really matter. So now I'm in block $tb1. This is going to be one arm of the if, and I need another block to be the other arm of the if. In this case I'm not going to flow a value out of it, we'll see why in a minute; this will be $tb2, with no parameters and no results. Okay, I think you're getting the feeling that WebAssembly is a little bit of a weird machine. So look back again at the factorial program: I say, if n is zero. First I have to get n, because it's a stack machine, and I do that with local.get 0, because the parameter n is the first parameter, index zero. Then there is an instruction called i32.eqz, which pops the top of the stack and pushes a one if the top of the stack was zero, and pushes a zero otherwise. And then I use br_if, which pops the top of the stack and, if it's nonzero, jumps to the target, in this case $b2. What does it mean to jump to $b2? It means to jump to the end of block $b2; that's how blocks work: if you jump to them, control flow continues at their end. So here, after $b2, let's imagine I'm doing the then branch of the if, which is just 1. That's i32.const 1; that's how you make a constant in WebAssembly. And this value will simply flow out of the block; it will flow out of block $b1 and out, in fact, to be the return value of the function. So we've implemented the then side of the if. Now we have to implement the else side. So let's see: n times the recursive call. First let's evaluate n: local.get 0, that's n. And then n minus 1: local.get 0 again, i32.const 1, i32.sub. You see, this is like the old calculators, it's reverse Polish notation: i32.sub will pop two things from the stack and push the deeper one minus the shallower one. So now we have our two values on the stack, and we are going to call fact. And as you see, as a compilation target, WebAssembly doesn't have unstructured goto, and it also doesn't have unstructured stack access: there's no stack pointer that's accessible to you, there's no link register, no program counter, any of that. You can't even alias the stack to anything and walk the stack. This is what you have. Once we call the factorial, we're going to need to multiply; that first n stays on the stack over the call, so i32.mul will multiply the two. And here, recall that the type of this inner block has no results, so I could branch: I could actually br 2, so not to block 0 in depth, and not to block 1 in depth, but to block 2 in depth, because the function makes an implicit block. But what I'm going to do is just say return, which effectively does the same thing: it branches to the outermost block. That's my function. I simply need to export it to whoever loads the WebAssembly, because WebAssembly is something that you embed.
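Put together, the hand-written module dictated above comes out roughly like this. This is my reconstruction: the block names are the ones used in the talk, but I've written the inner block types inline as results instead of going through the named $tb1/$tb2 types the talk declares.

    (module
      ;; one i32 parameter, one i32 result
      (type $tfac (func (param i32) (result i32)))

      (func $fac (type $tfac)
        (block $b1 (result i32)       ;; outer block: its value becomes the return value
          (block $b2                  ;; inner block: no parameters, no results
            local.get 0
            i32.eqz
            br_if $b2                 ;; n == 0? jump to the end of $b2
            local.get 0               ;; n
            local.get 0
            i32.const 1
            i32.sub                   ;; n - 1
            call $fac                 ;; fac(n - 1)
            i32.mul                   ;; n * fac(n - 1)
            return)                   ;; the recursive case returns directly
          i32.const 1))               ;; the n == 0 case: 1 flows out of $b1

      (export "fact" (func $fac)))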
So we have our program here, and I hope it type-checks. Let's move over to our terminal, and we can run wat2wasm. This is a utility from the WABT project, W-A-B-T. wat2wasm fact.wat. And the problem is that they used the spirit of S-expressions but not the reality, so some things, like the comments you would use in Scheme, don't actually work. So I remove my comment, and I see I got no errors, and as you can see, the wasm file is 67 bytes, which is kind of nice. I can give this a test: I have a little HTML file that will load up the wasm file, so I need to run a little web server, python3 -m http.server. So now I'm running a little web server, and I can go to my web browser, not that web browser, one moment, right, this web browser, and I open up my HTML page. Oh, I'm getting some errors: fact is not a function. You know what, I think I need to go back to my Emacs; I actually exported it using the wrong name. I export the function as fact. Back to my terminal, wat2wasm, restart my web server, back to my web browser, reload. Excellent, got some factorials. fact 30, fact 31, fact 32... oh, it's negative. That's right, we're using i32 values. Funny. So we have our desired WebAssembly, and it's small. Let's see how we can use Scheme to produce this. But again, here's a link to the WABT project which provides this wat2wasm tool. Let's work on making a simple compiler from Scheme to WebAssembly. All right. So if we look at the program we're compiling, factorial, we're going to need to scan all the forms in the file for definitions, and we will transform them into a set of types, functions, and exports. For simplicity we'll just export each top-level function. With WebAssembly being a stack-based language, it's quite convenient to just traverse the body of each function in a direct fashion. And we'll need to record types as we see them. So we'll open a new file, compile.scm, use-modules to load up a pattern-matching facility, and here we go: define (compile port). We're going to read our forms from a port. At the end of the compile procedure, somewhere, we're going to loop through the forms that come in on the port, and unless a form is the EOF object, we compile the form and then continue; we loop until we're done. As you can see, this is an imperative loop. I will just define my types as a little list, which I'll be adding to as I go, and the same for the funcs and the exports. And then, when I'm all done, I'm going to make a module. I add to the front of these lists, so I'll reverse them when I write them into the final form: reverse types, reverse funcs, and reverse exports. And if I load up pretty-print, then I should be able to pretty-print this form. Okay. So now I just have to implement compile-form, right? I do some pattern matching on the form: it starts with define, it has a function name, it has some args, and it has a body. So first I will add the function f. What's the type of the function? We're going to have to add this type as we go; we'll assume there's an intern-type helper. Now, what is the type of the function? How many arguments does it take, how do we choose to represent them, how many return values does it have, how are we going to represent that? We'll just map arg-type over our args, since args is going to be a list, and for the result, let's assume we just return a single value, and that single value is going to be an i32. As I said, it's a simple compiler. So now I define arg-type.
What's arg-type? It just returns i32; no big deal. So that's going to be the type of the function. And then for the body, the issue is that this is a classic recursive problem: we need to scan for what the definitions are and then visit the uses of those definitions. So I'm going to delay the compilation of the body by simply putting it inside a lambda: compile the body in an environment, where the incoming arguments define the set of local variables that we can use, so we turn the arguments into an environment. And we will also add this function as an export. Using what name? (symbol->string f), so "fact", like that. And with which identifier? f. And that is compile-form. What about args->env? How are we going to represent our environment? We essentially need a map from name to index of the local variable, something we can add onto very easily. We are simply going to use a reversed list: to look up a value we search for it in the list and then see how long the tail after it is; we'll see how we use that a little later. Actually, we can define that now: lookup-local, for a local in an environment, does a memq of the local in the env. memq searches for a value in a list and returns the head of the list at which it's found, so the last element will have a tail of length zero, the next-to-last a tail of length one, and so on; that's how we do our mapping from names to numbers. Great. So now what's left to do? intern-type; let's give that a bang: intern-type!. We have params and results, and the type that we are going to reify we might as well just make right now, as the Wat form that we will residualize to the file. But if we see a certain type twice, we might as well not add it again: if member finds the type in the types list, we return its index; otherwise, the index is the length of types, we set! types to cons the type onto types, and we return the index. Okay, so intern-type! gives us an index into the types of this module. I think what's remaining is add-export and compile-exp. Let's take care of the export now: we do a similar cons onto the beginning of the list, set! exports to cons an (export name func-index) onto exports, but we need to get an index for the function, so we look up the func by its id; and we don't care about the return value. We might as well make this lookup similar to lookup-local: lookup-func, if the head is a func with a name equal to the id we're looking for, followed by something or other, then we compute its index, similar to before; otherwise we recurse on the rest of funcs; and if we run out of the list, it means we're actually looking up an unbound variable. Does that match what we actually push onto the funcs list in add-func? Ah, we haven't actually written add-func, that's the issue. So: we get an id, we get a type index, and we get a body; we set! funcs to cons a (func id type-index body) onto funcs, where the body is a lambda at this point. Great. I think at this point we have everything we need to go into compile-exp, and this is the meat of the thing: compiling expressions.
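As a reference point, the scaffolding described so far might be condensed like this. This is a sketch in the spirit of the talk, not its verbatim code: intern-type!, add-func!, add-export! and args->env are the stateful helpers just discussed, and compile-exp is the expression compiler we're about to write.

    ;; assumes (use-modules (ice-9 match))
    (define (compile-form form)
      (match form
        (('define (f . args) body)
         (add-func! f
                    (intern-type! (map (lambda (arg) 'i32) args)  ; every parameter is an i32
                                  '(i32))                         ; a single i32 result
                    ;; delay compiling the body until all definitions have been seen:
                    (lambda () (compile-exp body (args->env args))))
         (add-export! (symbol->string f) f))))

    (define (compile port)
      (let loop ()
        (let ((form (read port)))
          (unless (eof-object? form)
            (compile-form form)
            (loop)))))
    ;; after the loop, the talk reverses types, funcs and exports and
    ;; pretty-prints the final (module ...) form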
Looking back at fact.scm, what are the different kinds of expressions we have? We have calls to zero?, we have constants, we have lexical references, we have calls to top-level procedures, we have primitives like minus, and we have an if. So we're going to have to match on these different expression kinds; match on the expression. First of all, what if it's just a symbol? In that case it's a local variable reference, so we emit (local.get (lookup-local id env)). Because each expression in the source language can compile to a number of instructions in the target, each of these compilers is going to return a list; that way we'll be able to produce the Wat format more easily. What if the expression is a number, more specifically an exact integer? We can use i32.const, as we did before. What if the expression is an if, with a test, a then and an else? As we showed before, we are going to use a block whose type is, interning the type (we have a bit of imperativeness in the heart here), nothing incoming and i32 outgoing; and we start with another block inside it, interned type, nothing incoming and nothing outgoing. Having set up our two jump targets, we can refer to the closest one as jump target zero and to the one that's one step out as jump target one. We compile the test, splicing its instructions in with unquote-splicing, and then we br_if 0: if the test is true, we jump to the innermost block; otherwise, we continue with the else part. We compile the else, and then br 1, which is effectively the same as a return here, but we don't want to emit return because we might not be at the tail of the function. So this whole expression either ends by jumping out of that block, that's the else side, or otherwise, after the inner block, we compile the then in the environment, and it ends simply by that one value flowing out. We have done if; we need to do just a few more. zero?: compile the subexpression and then i32.eqz, the same instruction that we residualized earlier by hand. What else have we got? Minus: we compile a and then b and then i32.sub. Times is similar: a, b, i32.mul. And I think the only thing left at this point is the function call: f has to be a symbol, we compile the arguments with append-map, and then we emit (call (lookup-func f)). And I think that may be our compiler. What are we missing? append-map is not in the default environment, it's in SRFI-1, so I pull that in, and I fix the export. Trying that a bit, I had to fix an invocation of lookup-local and I had to rename some invocations of compile to compile-exp. That's all done. If we take a look at the end of the file, we say: when we're in batch mode, look at the command line, take the first argument and pass it to compile. Let's see what we get. Oh, we forgot to actually recurse into the procedure bodies, so let's go ahead and do that: here I've fixed the funcs list to invoke those delayed compile-body lambdas and splice the results into the bodies. And I had to fix a couple of other bugs. Once we do that, we can go to the terminal, run Guile on compile.scm with fact.scm, and we can see the result. It looks similar to what we wrote, more or less; it just doesn't have a lot of the names in it.
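For reference, the expression compiler we just built up, condensed into one sketch. This is simplified and not verbatim: lookup-local and lookup-func are the index-lookup helpers from before, append-map comes from SRFI-1, and, as in the generated output, the branch targets are plain depth numbers rather than names.

    (define (compile-exp exp env)
      (match exp
        ((? symbol? x)                       ; local variable reference
         `((local.get ,(lookup-local x env))))
        ((? exact-integer? n)                ; integer constant
         `((i32.const ,n)))
        (('zero? a)
         `(,@(compile-exp a env) (i32.eqz)))
        (('- a b)
         `(,@(compile-exp a env) ,@(compile-exp b env) (i32.sub)))
        (('* a b)
         `(,@(compile-exp a env) ,@(compile-exp b env) (i32.mul)))
        (('if test then else)                ; two nested blocks, as in the hand-written Wat
         `((block (result i32)
             (block
               ,@(compile-exp test env)
               (br_if 0)                     ; test true: skip ahead to the "then" side
               ,@(compile-exp else env)
               (br 1))                       ; the "else" value jumps out of the outer block
             ,@(compile-exp then env))))
        (((? symbol? f) args ...)            ; call to a known top-level function
         `(,@(append-map (lambda (arg) (compile-exp arg env)) args)
           (call ,(lookup-func f))))))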
If we pipe that to a file and then run wat2wasm on it, we get no errors. At that point, if we switch to our web browser and refresh... we have no web server running, so I start the Python web server again, and indeed, it can do fact of 29 and everything. Pretty cool. fact of 10, everything works? Okay, great. So that is the simple compiler. There are a lot of things that remain to be done. If you take a look into the GitHub repository associated with the talk, you'll see the compiler itself, including an assembler as well, so you don't actually have to run wat2wasm on the output: it will produce the bytes directly. And I would encourage you to do that, because it's not that much code; we're in the minimalistic languages room, and dependencies are the opposite of minimal. However, the language that we compile doesn't include a lot. It doesn't include closures and tail calls and all of that, and those are still pretty big problems; they're the reasons I haven't ported Guile to WebAssembly at this point. The only real solution today for closures is comprehensive closure conversion of your entire program, so that you don't end up making indirect calls; instead, you call known recipients with the closed-over values as arguments. For tail calls there's, again, no real replacement. There's a proposal; I've actually worked on it a bit in SpiderMonkey, but I had to drop it because it's just not high priority. For a variable number of arguments: WebAssembly is a typed language, you can't call a function with the wrong number of arguments, and you can't have a function that takes a varying number of arguments, so you need to store those values somewhere in a data structure and then pass that data structure to the function; maybe you need a shadow space to pass arguments. It's not very nice. Threads, on the other hand, are coming along a bit more nicely; the threads specification will probably be usable soon enough. With regard to how you ship programs in WebAssembly, you really want to do whole-program compilation; if you start to think about linking different Wasm modules together, I don't think it's heading towards a good user experience, and we're thinking about this mainly in the web browser context. For exceptions, you can bounce through JavaScript and use JavaScript's try/catch, but if you're doing that, then maybe you should consider compiling to JavaScript directly instead of WebAssembly. There is a proposal coming down the line, which might be usable this year, exception handling, which provides the very minimum of non-local control flow. It could be extended with effect handlers, which would give us coroutines: one-shot continuations that we could use in different asynchronous idioms. But the big problem is GC. I worked on one compiler to WebAssembly, Schism, a Scheme implementation. It uses i32 as its value type, but not everything is an integer: it's a tagged value, so if it's not an immediate fixnum, then it's a pointer into linear memory. You can implement your own GC, and that's fine, but the problem is that you can't trace references on the stack: you can't move values in linear memory and expect the stack to update to point to those new values. And you have essentially no relationship with JavaScript, which already has a fantastic garbage collector in all the browsers. The approach that I have switched to is this: if you have a language in which you need to represent different kinds of values and they're all garbage-collected, I would use externref as your fundamental value representation. That's the reference types specification, which hopefully will also become usable this year.
And eventually it should be possible to access structured, garbage-collected, managed objects in WebAssembly itself, and there are relationships to closures there. There are lots more questions. What do you do about strings? It's a big problem. What do you do about access to WebGL? There are solutions, and happily the import functionality is quite good. And then, if you're working with more traditional languages, there's a completely different toolchain that I'd be happy to talk about at some point as well; I work on the LLVM WebAssembly stuff too. And do you actually take your language as it is, or do you fork it and give it some different semantics that are adapted to the platform you're targeting in the web browser? Big questions. I'm happy to answer them. I think I'm out of time, but thanks for listening. Thanks for hanging out, and happy hacking.
An introduction to compiling to WebAssembly, from a low-level perspective. Learn more about the nuts and bolts of targeting basic WebAssembly 1.0, as well as a look forward towards extensions that are still in development. Let Andy guide you through the general shape and the low-level details of WebAssembly as a compiler target. As a significant contributor to Firefox's WebAssembly implementation, Andy is in a good position to know what you can ship in practice, as well as what you might expect to be able to ship in a year or in two years. He has also written compilers that exercise more experimental features, such as integration with host garbage collectors, tail calls, and more. His experience working on the WebAssembly backend of LLVM has also been illuminating in ways not limited to compiling C and C++ programs.
10.5446/53568 (DOI)
Hello and welcome, and thank you for joining. I'm Florent, and today I would like to talk about Angular test-first development. I would like to show you a way of writing your tests before the implementation of your Angular component, in order to make sure that your component can be easily refactored and that your tests are focused on the business of your component, the logic that you want to implement, and not the implementation itself. To do so, we're going to write this small component, which is a drop-off point selector: you can switch between pages and show the location of a point on a map. Okay, so let's get started. The component is completely empty at first: the template is empty and the controller is empty too; we just have the input to receive the data. And in the test file, I only have one function to generate the drop-off point data, where I can just override properties of that data. Okay, so let's write the first test. What's going to be the first test? If we try to write the first test without any component-specific code, what do we want in terms of features? The first thing I want is the list of the locations with the name of each location, and I want only the first page, because I will do the pagination at the same time. So let's name this test "should print one page of locations". Let's summarize what we want as given/when/then: in the given, I have a certain list of points; the when is that it renders; and then I expect, let's say five, let's say we want a pagination of five locations, with names A, B, C, and so on. So that's going to be our first test: given a certain list of locations, when it renders, I expect to have a subset of the list, the first elements, and I expect to have their names. How can I write that, since I don't know how the template is implemented, because I did not implement it yet? Before thinking about the template, let's just create the test data. For the drop-off point array, I'm going to generate some data using the function I defined before; it's just a builder, so thanks to the drop-off point builder I can override the name, I don't care about the other data for the moment, and I build it to get the value. I have some mocks here, so I'm just going to copy and paste a few names; I'll take about seven of them: one, two, three, four, five, six, seven. Okay. So, given this list of locations, when it renders: how can I express "it renders" in the test? It's simple: Angular gives us something for this, which is detectChanges. We tell Angular to detect the changes and render the template, and that's our "when". And now the tricky part: how can I retrieve the data from the template? I don't know how the template looks. So in this case, I will need a contract with the template. I don't know how it looks, but I want at least to define how I want to retrieve the value of the name of each location. To do so, I'm just going to assume that the HTML element that has the name will have a certain class, and the class I want to use is going to be, let's say, drop-off-point-name. So now I want to retrieve all the elements in the HTML that have this class. To do so, I also have something in Angular: on the debug element, I can use queryAll.
With queryAll, I can give a predicate, a way to filter the elements I want from the elements I don't want. And Angular also gives us something to do that easily, which is By: when I say By.css, I can use a CSS selector to determine whether or not I want an HTML element. I'm going to write it as a class selector, so I use a dot: .drop-off-point-name. With that, Angular will give me every element that has this class. It's going to give me an array of debug elements, but what I want is not an array of debug elements, it's an array of HTML elements, so I need to transform it. I'm going to iterate over it: I'll use Array.from to encapsulate that, and in my Array.from I'm just going to map it, and for each debug element I'm going to retrieve its native element: debugElement.nativeElement. Okay, perfect. So with this full line, what I have is an array of HTML elements. But that's not all I want: what I want is the values. First, I'm going to extract that as a function, because I might use it later; I will call it findAll, and I'm going to take the selector as a parameter, a string. So I find all the elements with the drop-off-point-name class and return them as HTML elements. But as I said, I don't want only the HTML elements, I want their values, so I'm going to map again, and since each item is an HTML element, I can take the text content of the real DOM element. And since it's HTML, it can have blank space at the beginning and at the end, so I'm just going to trim that. So, by querying the template like this, I create a contract saying that I want to retrieve each element marked as drop-off-point-name, and I want its value: these strings are the drop-off point names. And now, thanks to this array, I can write some expectations on it. I can expect that the drop-off point names' length equals five, because I want only five locations at first: I don't want all of the locations here, only the first five. But at the moment I only verify that I have five, so I also need to verify that it's the first five. To do so, I'm just going to say that the drop-off point names contain, for example, "Albert". I don't use toEqual with an array, because I don't care about the order of the list here; I'm just using toContain, which is more flexible. So let's do it for the four remaining names. Oh yeah, I pasted some of them multiple times, but the idea is there: one, two, three, four; I missed this one, and I also want this element. Okay, so now I correctly expect to have five location names. Okay. So, next test. The next test is going to be almost the same, but instead of just rendering the list, I want to go to the next page. Let's almost copy this test, because it's going to be almost the same: "should print the second page of locations". And this time, instead of just calling detectChanges, I want a way to click on the next-page button: I want an element I can click. So I need to create a new contract, and I need to create a new function, find, because I don't want to use findAll; I only want one element this time.
I want to find something that has the next-page marker, let's say. This find function is going to be almost the same as findAll; let me put it below and create it. It is defined the same way, but instead of querying all elements, I query only one, and then I take its native element. So this way I have find, as simple as that. So I have my next page element; oh, my bad, it's not going to be an array of HTML elements, it's a single one, that's why the completion was not correct. I don't even want a variable here, I just want to click on this element. And I also want Angular to detect the changes, so I need to re-render the template again. Now that I've gone to the next page, I don't want to have five elements but only two, and it's going to be the two remaining elements here, so let me change the assertion. Okay, let's save that. So now we know how to go to the next page: from the first page, we go to the next page and we verify that the names are correct. Let's do the same for the previous button. It's going to be almost the same as the first test, but instead of just detecting the changes here, I'm going to do the same as for next page and also click previous page; so I do next, then previous, to make sure it doesn't pass just because the first page is printed by default. Let me correct that. Okay. So we have quite a few tests now; we can maybe start writing an implementation. So let's go to the template. I'm going to use a table and create a new row, and in the row I'm going to create a cell. I'm going to use an *ngFor: let dropOffPoint of dropOffPoints, and I want to print the location name of the individual drop-off point. But I still need to say in the template that, yes, this is where the name is located, so I need to add the class here, saying that the drop-off point name is here. And I also need two buttons: one saying this is the next page, and the same for the previous page. Okay, but I also need to implement the paging. When I click, I want to call nextPage; let's create this method: page += 1. Let's create this field, a number starting from zero. And for previousPage: page -= 1. Let's have a look at the tests; they were failing before, I forgot to show you, but that was obvious. Oh, and I forgot something: I forgot to use the pagination here. What I want is a slice, from page times five to page plus one, times five. Okay. So let's have a look at the tests: the tests are all green, that's cool, and we can have a look at the implementation. The next page is working, the previous page is working, perfect. So now we want something to select a point, to show a point on the map. But let me commit that first: implement the pagination. Okay. Next test. Now we want something close to the previous one; in this test I don't want to verify the names, so I don't change that part, I just want to have the same values, but instead of overriding the name here, I will also override the value of the coordinates, the latitude and longitude. Let me find some data... okay, in the spec... okay, perfect, that's the last one I need. Okay.
So now I have points with a certain coordinates. And I can verify that when I click on show on the map of this element, it's going to show me on the map what I'm looking for. So same as before, we want to first render the list. And after I want to find the button show on map. So new contract. I want an element clickable that have a show on map. Show on map. And when I want to click on this one, let's use a find all, by the way. Okay. And for example, click on the first one. Then I want, then what I want, by the way, yeah, what I want. What I want is now I want to have a map shown with the location, the coordinate of the location, the address. I don't want to do it in this element. So I'm going to delegate that to the map component. So I can do the fixture, the text changes here, by the way. And now instead of finding an element with a selector, I can use a different kind of contract saying that instead of querying by CSS, what I can do is querying by directive like this. And I can say I want to select a map component. And I don't want the native element because I want the Angular element of it. So what I want with that is the component instance of this map component. And this map component has a certain contract with me. It has some latitude and some longitude. So when I give these two values to the map component, I know that the map component will display it correctly. So I don't want to verify if it's correctly displayed. If I give to the map component the correct data, it's going to be correctly displayed. So what I can verify if I do that is I want to expect that the map component latitude is equal to the first latitude. So this value and the longitude to equal to this value. And by doing that, I'm sure that it will, when I click on show on map, it will render the correct way because I delegate to the map component the fact to show on the map. So it's not over yet the test because since I have a new component, a map component, now I need to declare it in the test bed. So let's just add it here. And now I can check that the test is failing. Yeah, I have a correctly failed test. So now let's implement it. I want to have a map component here. So it's going to be an app map. I want some latitude and some longitude here. And I want which latitude and which longitude. I don't know yet. So map, location, latitude and location, longitude. I can create this both element, public and do the same for longitude. Okay, so also what I want is now is a button that will show on the map. So let's create a button here. I need to respect the contract by saying that the class is show on map. Show on map. And when I click on it, I want to show on map this location. So, okay, I'm going to create this method. And this method will simply assign the latitude of the drop of point and do the same for longitude. Let's verify the test. Did not miss anything. Okay, test is green. That's a good sign. It means that, yeah, when I click on show on map, it shows on the map. Perfect. Let's commit that. Okay. See how we can write the test before the implementation and be sure that it's correctly displayed after. To summarize, this is a way how you can do some template contract in order to test your component. I hope you enjoyed and learned something and see you right now for some live question. And thank you for your attention.
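A sketch of that last contract, with the same caveats as before: the By.directive() query, the componentInstance access and MapComponent's latitude/longitude inputs are what the talk relies on, while the component and fixture names are assumptions.

    import { By } from '@angular/platform-browser';

    it('should show the selected location on the map', () => {
      fixture.detectChanges();

      // Click the first "show on map" button in the list.
      findAll('.show-on-map')[0].click();
      fixture.detectChanges();

      // Query by directive instead of by CSS: we want the Angular component
      // instance, not the underlying DOM node.
      const mapComponent: MapComponent = fixture.debugElement
        .query(By.directive(MapComponent))
        .componentInstance;

      expect(mapComponent.latitude).toEqual(FAKE_DROP_OFF_POINTS[0].latitude);
      expect(mapComponent.longitude).toEqual(FAKE_DROP_OFF_POINTS[0].longitude);
    });

As mentioned in the talk, MapComponent also has to be added to the declarations of the TestBed module, otherwise the app-map element is not compiled.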
How do we write Angular components that can evolve? How can we write components that can easily be refactored, and tests that won't become useless as soon as we change the implementation? We can achieve all of that by writing the tests first: tests that are agnostic of the implementation and focus on features. Refactoring an Angular component can be delicate, especially if the refactoring breaks all existing tests. Adapting the tests while refactoring is always a risk; it can make the tests useless. So how can we write tests that aren't coupled to the implementation? By writing the tests first: if there is no implementation yet, the tests can't be coupled to it. Let's see how we can write these tests without knowing what the component looks like.
10.5446/53569 (DOI)
you Hello everyone, first of all, thank you so much to the organizers for making this special for them and this JavaScript that's possible. I'm David Moreno and I'm going to present you Bavia XR Virtual Reality Data Visualizations using only Frontend and what is better to start than showing directly live demo? Well, technically, this is a video. So this is the first demo. Record it in a browser and show in a scene, a 3D scene, with a data visualization with some interaction with the keyboard for moving and the mouse for selecting and hovering. Now, as I said, this works in a browser, so as probably you notice, this is based on Web XR and this scene is also compatible with VR glasses. This example shows the same scene but using an Oculus Quest 2. And we can see that the data visualization is also compatible with the Oculus Quest 2 controller. Maybe it doesn't seem amazing but what if I tell you that this is the code for generating this scene? And yes, it's not technically code, it is just these few lines on HTML. This is possible using two scenes, A-Frame and Bavia components. I'm going to get deep into A-Frame and Bavia components in the presentation. So I'm going to show you another demo. This is the other example that I want to show you and well, this is HTML. If we add a set of libraries and more lines, we can create scenes like this one, a complete launch, like a museum, but showing data with the possibility of interacting with the charts using the cursor. And well, you can see that in these scenes there are different kinds of visualization. And one of the most important one is the one with the green base, this one, I'm going to stop the video here. That shows a city. This is one of the special visualization that Bavia has included. I will show you more information about it in the presentation, at the end of the presentation. And well, this is the same example of the launch, but using the VR glasses, Oculus Quest 2 as well. And well, it is completely compatible with the glasses and the control. You can select it with the right control. And the code, the lines of HTML are basically the same, but adding more entities. I'm going to explain what is an entity in the presentation as well. As you can see here, we have added the Bavia components script. And we define here the different things of the scene, the sky, the camera, the different visualizations, and so on. Perhaps you notice that there is an empty space in the middle of the scene. Don't worry, at the finish of the presentation, I will add a visualization there in order to show you how simple it is to add this kind of visualization in a 3D and VR scene using a frame. So going back to the presentation, I want to say that the presentation is full of QR codes and links to the examples, the documentation, and some more useful information that I'm going to show in the presentation. And now I will share the slides or they will be available now in the first and talk description. The first thing that I want to show is the main page of Bavia. Well, it's a common page. It includes useful information about the usage of the components, the documentation, how to build a scene, how to use the components, link to the samples, and so on. I just want to say that this is still working progress. We appreciate the feedback and if you want to contribute, the code is hosted on GILLAB. The essence of the code is GBL version 3. This is the link to the first example that I showed, and this is the other link to the launch example. 
You can try to enter the example using your smartphone, GBL glasses, or just the browser in your computer. So the main technology that Bavia uses is A-Frame. Bavia completely relies on A-Frame. And what is A-Frame? A-Frame is a web framework for building 3D AR VR experiences with HTML and the entity component. A-Frame is developed on top of 3DS and again, with this just few lines of HTML, we can create scenes like this one. And well, A-Frame uses the entity component system definition. So following its documentation, entities are contained objects into which components can be attached. Entities are the base of all objects in the scene. Without components, entities neither do nor render anything, similar to empty divs. Components are reusable models or data containers that can be attached to entities to provide appearance, behavior, and or functionality. Components are like plug-and-play for objects. All logic is implemented through components and we define different types of objects by mixing, marking, and configuring components. So Bavia uses components as its main library. And I'm not going to stop on systems because we are not using systems, but if you want more information about it, there is a link here with a QR that will redirect you to the A-Frame page. So specifically, entities are represented by the entity element and prototype. Components are represented by HTML attributes on an entity and systems are similar to components, but I'm not going to stop there because in Bavia we don't use systems. And what is the logic? The logic is in the component definition. The JavaScript code is there. And this is how the component has to be defined in A-Frame. First of all, we have the schema. The schema are the attributes in the HTML definition of the components. And we have to define the type of the attributes that we are going to use in the component. We have here this example and we see that the attribute bar is defined here and the type is number. And then we have four predefined functions that will be executed some moment of the same execution process. First of all, we have the init function that we do something when the component first attacks the update function that we do something when the component data is updated. For instance, if you change the value of one of the attributes using JavaScript, for instance, getting the element from the DOM and changing manually the value of one of these attributes, the update function will be executed. We have the remove function that will do something when the component or its entity is detached and the tick function that is really important that we do something on every scene tick or frame. Well, Pavia XR is a set of components for data visualization, but it includes components that are not just visual. It includes components for query data, filtering data, and visualizing data. So following this stack, the components starts with the query data. But before moving to the next slide, I want to let you know that this QR at this link will redirect you to the A-frame documentation with more information about how to write a component and more details about it. So the first components in the stacks are the components for query data. We call them queriers. We have currently three types of components that retrieve data, three types of queries. The query is JSON, the query is ES, and the query is GitHub. The query is JSON, we retrieve data from our URL that has a JSON. 
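As a reference for the lifecycle just described, a bare-bones component registration looks roughly like this. The component name 'hello-component' and the log messages are only illustration, and it assumes the A-Frame script is already loaded so that the AFRAME global exists.

    AFRAME.registerComponent('hello-component', {
      // Attributes exposed in the HTML, e.g. <a-entity hello-component="bar: 5">
      schema: {
        bar: { type: 'number', default: 0 },
      },

      init: function () {
        // Runs once, when the component is first attached to its entity.
        console.log('attached with bar =', this.data.bar);
      },

      update: function (oldData) {
        // Runs whenever the component data changes, e.g. via
        // el.setAttribute('hello-component', 'bar', 42) from JavaScript.
        console.log('bar changed from', oldData.bar, 'to', this.data.bar);
      },

      remove: function () {
        // Runs when the component (or its entity) is detached.
      },

      tick: function (time, timeDelta) {
        // Runs on every frame of the scene - keep this cheap.
      },
    });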
We have this simple sample to put here, the URL of the JSON, and the component will retrieve the data. The query is relative data from an elastic sets database, and the query is GitHub, will retrieve the data from the GitHub REST API. When the components retrieve the data, the component stores the data in one attribute of the component. The next component is the filtered data component, a component that is attached to a query and filters the data that the query retrieves. That's it. The component is still working, so we have to add more filters to the component. The next component is the component that is called PISMapper. This component maps the data fields to visualization properties. For instance, this component is used for putting a field of the data as a hate in a 3D bar chart visualization. Or put another field as the X axis and so on. And finally, the final components of the tags are the components for visualization the data. We have included so far the next one, the pie chart, the donut chart, the 3D and the two bar charts with two or three dimensions. The 3D and 2D cylinders, the same that the 3D and 2D bar chart, but with the dimension of the cylinder. The bar chart and the final component, the last one added is the code city component for visualization code as cities. Returning to the second demo of the presentation, the demo that I did at the beginning of the presentation, we are going to fill the empty zone that we saw in the lounge. And in this case, we are going to add the components in the bar chart. We are going to add a 2D visualization representing a bar chart with the configuration of the legend, the axis, the maximum height for normalized data. We added a filter data as well and a mapper in order to put the fields in the X axis and in the height. So this is the lounge completed, we have now our visualization in the empty zone and the example is now completed. As you can see, it's really simple to add a visualization in HTML, just these three lines. And this is more complicated because we added a filter data and a mapper, but you can add just the component of visualization and put and digit the data. And the next point is the code city component, so let's go for it. But before starting with this component on the examples page of Bavia, there are visual examples of each component described in this presentation. This is the link to the examples page. And here you can see different squares with the different samples of each component. And I want to leave these links here where all the information and the details about the component definition, the attributes of the components, and the user guide for using the components. So this first link redirects to the rhythm, where there are a small sample and all the API of each component. And this other link is a component social guide with a step-by-step guide for using the components stack. So if you want to take a little later, here are the links. So focusing on the code city visualization where, or city visualization, I want to say that the visualization relies on the city metaphor for visualizing data that has a tree format. It's just data with a tree format representing as a city. But we focus on the study of the visualization for visualizing code. I mean, the city represents a software system. Each building represents a file of the software system. And the quarter hierarchy represents the folder hierarchy of each building. And the major representatives are the height and the area of each building. 
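To get a feel for how the stack composes, here is a sketch that builds such a chart entity from JavaScript instead of static HTML. document.createElement and setAttribute are standard DOM/A-Frame calls, but the babia-* component and attribute names below are placeholders rather than a verified API — check the BabiaXR documentation for the exact spelling of the querier, mapper and chart attributes.

    const scene = document.querySelector('a-scene');
    const chart = document.createElement('a-entity');

    // 1. Querier: fetch the raw data (placeholder component/attribute names).
    chart.setAttribute('babia-queryjson', 'url: ./data.json');
    // 2. Mapper: decide which data fields drive the x axis and the height.
    chart.setAttribute('babia-vismapper', 'x_axis: name; height: value');
    // 3. Visualizer: render the mapped data, e.g. as a 2D bar chart.
    chart.setAttribute('babia-bars', 'legend: true; axis: true');

    // Place it somewhere visible in the scene and attach it.
    chart.setAttribute('position', '0 1 -3');
    scene.appendChild(chart);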
The height of the building represents the lines of code of the file and the area of each building represent the number of functions that the file code has. Obviously, these values are normalized. But adding more functionality to this visualization, we are currently working on the time evolution of the software city, changing the layout of the city as the code is changing along the time, observing how the building changes their area and their height, or even disappear as the code changes between commits. For moving between this time snapshot, there is a component called UINAPBar that is this one, that allows the user to interact with it moving the city between the time snapshots. This visualization is fully configurable with all these parameters. I'm not going to stop here because there are a lot of them as you can see, but in this link, there is the user guide with all the description of which one, and again, a step-by-step user guide in order to understand the visualization of the component. And the same with the definition of the component in the HTML file. But well, there is an important point on visualization that is the data. For producing the needed data for this visualization, we use a different set of tools that are in the Grimoire lab stack. We are starting analyzing a repository using Gral. Gral leverages on the git backend of Perthable and enhances it to set up a doc source code analysis. Thus, it fetches the commit for the git repository and provides a mechanism to plug the Part 2's libraries focused on source code analysis. Specifically, we use Gral to obtain a matrix related to code complexity of our repository, and then Gral stores the data in the last search, in an last search database, in the last search search engine. Then we created a Python code included in the Bavia XR components repository in order to get the JSON data with all the information that will be ingested in the code city component of Bavia XR. Again, this is a link to a more detailed explanation about the process of getting the data. And this is a complete use case with all the needed steps to create this time-evolving city, step-by-step, using a specific use case. In this video, I show the time-evolving city of the Shorting Hat project. This is the link to the Shorting Hat project. Well, I cannot say it right now because of the presentation. And this is the link to the video. I'm going to put just a little bit. As you can see here, we can see the UINNAT bar with all the time-evolving snapshots that we can navigate. And if we hover a building, we see the file that represents, and if we click on a quarter, we see the folder that this quarter represents. And, well, this is the visualization. We have included as well some tips that I'm going to show you now. For instance, this is a tip that explains, for instance, in this case, the interaction with the city. And this explains a little what is the city about, what it represents. And if we click on a dot in order to change the time-evolving snapshot, we can see how the city changes its layout. For instance, in these two points, a file disappears. There is another tip with instruction to use the navigation bar. We can see how the city evolves. In this case, we are moving from present to past, and we can see how the building disappears. 
And, well, this visualization is completely compatible with the Euclid Quest and the smartphone, and I said, when I set the smartphone, I said any device with a modern browser that supports WebXR, but if you are using a VR glasses, it has been tested with Euclid Quest and Euclid Go. You can use the controller for interacting with the city. So if you click, as we can see in this example, we can change the same in the time-esnaps and see how the city evolves. Just to finish, I want to show some other components that are under development right now. Starting with the Iceland component, this one, that provides a new layout for the city visualization. So the data is the same data from the cold city visualization that I already showed. It's a three-data format, and we have the goal to have different Iceland that maybe can represent different software systems, and we can see the evolution of the Iceland as the code of the software systems evolves. The same as the other visualization, but with a new layout that can provide different features than the other layout can provide. We are developing as well the Terrain visualization from Proginter Range that represent data, and they taught them for dynamically changing some visualization between different set of data. Of course, any other idea is more than welcome in the next months or so. More components are coming soon for sure. And now, actually, to conclude, this is another branch of research in BaviaXR, the branch of augmented reality. I want to ask you if you are watching the presentation to use your smartphone to scan this QR right now. Once you scan it, it will redirect you to a web page, and it will ask you for access to the camera. I promise that I will not steal any data, trust me. But I will let you to accept, and once the camera is activated, you will point the QR code again, then you will see data visualization in augmented reality. This is a video of the process, just in case you want to follow it. So this is the link, it will redirect you to the web page, it will ask you for camera access. And then if you accept the camera access, you will see in the middle of the QR some data visualization related to the Grimoire Lab project. Okay, so just in case you are interested in the BaviaXR project, we have a contributors guide to contribute to the project. We are under the GPL version 3 license, and I just want to say that any feedback, any tomato or whatever is more than welcome. And that's everything. Thank you very much for the organizers again and to you for your attention. I'll let me do in the Q&A session, just in case these are my emails if you want to reach me after the program. Thank you very much and see you right now. Okay, the live should start very soon. Okay. Now, we are live, right? Yes. Okay, so hello everyone. I hope you can see me and hear me. Yes, I think so. Well, thank you very much for watching and well, I'm here for the questions. So let's go. I wanted to say that the slides are already linked on the FOSDEM Talk page, just in case you want to run the QR demos. And well, I wanted to say that on the baby page, there are links to tutorials, to the live demos and all the information to build these things. And if you have any question or suggestions, you can send me an email or you can open an issue in the repository. I really appreciate it. So let's do it. Okay, so let's see if there is any question. Or to me, whatever you want. 
What I'm going to do is I'm going to share my screen and live here the last demo of augmented reality, just in case if you want to run the demo with your smartphone. Maybe this is a good idea. Yeah, I think you can see here the last demo, because I think it's the most visual demo and you can run it just with your smartphone and your camera. I hope you can see that we can have a beer there in Brussels. Let's see what happens. No questions. I'm going to put the video just if you want to know how to do it. So as I said in the presentation, you can go to the link, then the page asks you for for the camera permission, you should accept it. As I said, I don't go I don't, I don't still any data so I don't steal any data. So when the camera sets the pitch when you have the permission of the camera you see pointing to the qr augmented reality dashboard. Just in case to, if you want to try it. No questions. I encourage you if you have a beer glasses to try the qr demos that are linked in the in the slides in the slides. So, and if you have any feedback, please tell me this word me an email or open an issue, because we are currently working on, on adding more features to the VR integration, more, more interesting with controllers so please try it if you have a beer glasses and you can help me. Okay. So, I have a question here, maybe you can talk about how that is composed filter it is interesting. Yeah, I talked, I talked about that in the presentation in the body components in the body set of components. We have components for query data that I said, the components query data, just when a Jason or criminalistic database or putting the gift, the gift has rest API. And with this data, you can fit it with another component that is called filter data. Once the data is squared, you can visualize it with the visualizer that I already saw. So, all the information is in the presentation I think in the, in the, in the least that I, that I, that I have added in the presentation links, due to the to all the information about how to build this kind of things but if you follow the slides you can just do it. So, you should be able to do it. And, well, as I said this runs in every modern browser so just try it is just five lines of HTML. You know, it is works on web XR so it's pretty easy to use and pretty easy to configure. And as I said again, in the pages that I link in the presentation are full of information and step by step user guides and so on.
BabiaXR is a set of front-end FOSS modules for VR data visualization in the browser. BabiaXR is composed of different modules (for querying, filtering, and representing data) based on A-Frame and Three.js, with the goal of making it very easy to create different kinds of data visualizations (bar charts, bubble charts, cities, ...) by exploiting the power of WebXR and regular web front-end programming. There are plenty of tools that can analyze data in many ways; most of them are built as full-stack apps, and the well-known front-end libraries limit visualization to 2D, on-screen environments. Only a few of them try to visualize this data in the browser by integrating other technologies/environments like WebXR. This is the goal of BabiaXR: a set of front-end modules for 3D and VR (virtual reality) data analysis. The visualization part of BabiaXR is based on A-Frame and Three.js, providing a collection of components for querying data, filtering data, and creating different kinds of visualizations, all of them developed using just front-end technology. Among the visualizations there are common ones like bar charts, pie charts, and bubble charts, but now we are moving beyond that, exploring new ways of showing data in 3D/VR. For example, we are working on representing software projects using the city metaphor, showing the evolution of the project as the evolution of a city, with buildings corresponding to the different files. In this talk, I will give an overview of BabiaXR, showing different examples of the power of WebXR and A-Frame with different visualizations, and how simple it is to develop this kind of module using just JavaScript and other front-end libraries. Then, I will analyze a city corresponding to a well-known FOSS project, showing its evolution in different time snapshots and explaining how the code evolves as the city does.
10.5446/53570 (DOI)
Hi, so this talk is really a bit of a dive into how the Bangal JS came to be without needing a massive R&D budget. So Espionage is an open source project and it's been bootstrapped using crowdfunding and sales of hardware that runs it. There's no VC funding or big office so it's just me working from home. So what is it? It's a smartwatch like this with GPS and you can program it really easily in JavaScript. So why would you want one? Well it's nice to be wearing something hackable but also you can change any part of it if you want. The whole operating system, even the interpreter are open so if you don't like something you can go in there and tweak it. But also your data is not stored online and you won't find that with something like Pebble that was dependent on servers that might go down. This is designed basically not to need anything proprietary or that could go down. You buy it and it will just keep working forever. So the other part of it is that you or someone you know might have a hobby like sailing, biking, flying or anything where you want some specific information recorded or shown on a watch face. And maybe they're off the shelf options but chances are for things like that they'll be low volume and very expensive and may not actually do exactly what you want anyway. With BangalJS you can just hack something up really quickly and have exactly what you want. So you can actually make your own hardware from scratch like the person who's done this which is really cool. The problem is it's actually really difficult to make something that's very compact, very sturdy and also reliable and waterproof. You know this is cool, it's fun to wear a bit but I doubt he's going to wear it every single day or if he does it's probably not going to last very long. So you could just write an app for Apple Watch but you know it's not open, you can't change bits of the OS that you don't like and the subscription to the app store that you'd need to distribute an app for it costs more than a BangalJS every year. So you know it's not great really. Just engineering it to write your own OS isn't really an option either because it's pretty cutting edge, there's nothing really in there that's hugely user serviceable, it's actually got code in there specifically to stop you hacking it and if you make a mistake it's a really expensive mistake. It's not like a cheap watch that you could afford to lose if something went wrong with it. So there are a bunch of cheap smart watches out there, the things you see in here are this is a screenshot actually from a year or so ago and they've all changed again but generally a lot of these are made with technology that's a bit old and that's how they keep the price down and that actually makes them much more approachable as well. And the other nice thing actually is that the hardware is usually kind of okay and it's got quite a good battery life as well but the software is absolutely rubbish so it's ripe for having something else put on instead. So there are a few different categories of watch, there are some which usually use MediaTek chips and these are basically feature phone chips so they're getting towards the kind of complicated end of the hardware side of things, you're looking at slightly lower battery life but kind of much better graphics performance and probably even a GSM as well as GPS built in. So they're quite nice but probably a bit hard to get to grips with. 
But at the low end you've got some sort of Chinese manufactured chips like Hunter Sun and these they do Bluetooth low energy, they're a microcontroller but there's virtually no documentation at all. And then kind of sitting in the middle in like a really nice spot is the watches that have a Nordic chip in them and these specifically are the Nordic NRS 52. The ones I've been looking at are NRS 52832 so Nordic Semi-conductor is a semiconductor company from Norway, they use ARM chips in there. Actually this one is 64 megahertz 64K RAM 512K flash but there are actually higher end ones as well and they're just very very approachable and very well documented. So it's a really good thing to start from. So when you actually get to watch most of these because they're again there, they're made at reasonably low tech factories. They have programming pads on, you can see them here and they're actually, it's a completely standard programming interface for ARM microcontrollers called SWD. Four pins, ground power, two data pins. So it's dead easy to crack this open, attach those and start programming and actually debugging properly. So what if you don't want to open the watch? Well it's a bit tricky. So some watches like the Wrangler have actually, they've got screws on the back of them which makes it nice and easy. But some of them are glued shut, you know they're sort of semi-disposable things that were never intended to be opened once they've been put together. And for those it can be a complete pain. However there is a chance because while you wouldn't think that a company would ship their product and allow you to update firmware without a password or anything, you would actually be wrong. So what happens is these NRF 52832 watches especially, the chip first shipped with a SDK and the first like decent one that everyone seems to use is SDK 11. And that came with a completely insecure bootloader that would allow anything to be uploaded to it. And that allows you to switch into bootloader mode without even having any physical interaction on the watch. So if you see someone that has one of these old watches it is actually possible to completely change their watch firmware or break their device while the watch is on their wrist without them being even aware of it. So it was actually fixed in a newer SDK not that soon after but everyone had already made the watch firmware using SDK 11 and they just stick with it even now. So you still find that maybe 80% of these watches use SDK 11 and with a few hacks because the bootloader was pretty broken anyway but with a few hacks you can actually make it work properly. The newer chips 52840 things like that they use a newer version of the SDK which doesn't have the problems and then you can't update them wirelessly. So it's another reason that sticking with this type of watch is actually quite nice. So I really wanted to watch for GPS on it and for GPS and NRS 52832 you are extremely limited and this was basically the only watch. It's called the DC number 1 F18. It's called a 240 x 240 pixel touchscreen LCD. You might notice that actually a 240 x 240 x 16 bit is about 128K which roughly which is actually twice as much memory as you have RAM in the mic controller which adds its own interesting issues. So yeah it's got a compass, accelerometer, big flash chip, heart rate monitor and yeah GPS. So it's quite a neat little watch. So this isn't actually the first watch that I've looked at. 
This is a really small selection of the watches that I have had to dismantle and look at. And the annoying thing is that it's not specifically the hardware that's a problem in most of these cases. It's the manufacturer. It's really hard to get the manufacturer on board and say hey we really like your hardware but we want to completely replace your software. And in the end I opted for finding a watch that where I didn't need the manufacturer on board. I could just buy the watch and I could just flash it when I got it here. So when you get inside the watch you can literally just unscrew the screws around there and then you pull out this nice little plastic puck. And in it it's got the LCD right at the top of it. It's got on the backside it's got a little flex PCB with the heart rate monitor and some buttons, the charger. You can actually see there's a speaker in here in the battery and the GPS area right at the back. So it's a nice easy little arrangement. You can actually start to unfold it. These things are only held on double sided sticky so you can pull it back and you can completely unfold it and you can see everything here. So we've got the little PCB with the buttons on, got the speaker, the battery, this is the LCD and this is the circuit board with the mic controller and the GPS and everything else on it. So this point, I mean this is for a Jarskate Dev Room so I should probably point out how circuit boards work. So PCB stands for printed circuit board. This is a, it's pretty much a sheet of glass fibre with copper tracks on it that connect all the components together. This is a picture of a, it's a hand etched board. And it used to be that pretty much all the circuit boards looked like this and they slowly kind of moved on over the last almost 100 years I guess. So yeah, you've got copper tracks on here that connect the components together. Now sometimes as stuff's moved on you have something called sold resist which is often green which goes over the top of it which basically it stops the solder which is the sort of molten lead effectively, molten tin that you use to connect the components down to it. It stops that spreading out all over the place and making a complete mess and it also provides some insulation and stuff. This is a two layer board so you have one layer on the top, one layer on the bottom, both of which you can see just by turning the board over and you have little veers between that can connect them together. And the nice thing with this is you can have wires that cross over whereas on a single layer board you can't easily do that. So designing things like this works out quite nicely. If you can fit all the components on one side of the board it makes it much easier to fit them all by machine. And this is where a lot of stuff is done. Two layers is really handy because actually you can reverse engineer it. There are companies that will do this for you if you really need to know and you can't see because, so for example we've got a black sold resist here so you can't easily see the wires. You can just unsolder all the components, you can sand it down, you can stick it in a scanner and you can actually see where everything is connected. You do this on the other side of the board as well and you can work out the entire circuit and you can do this for any device regardless of whether the manufacturer wants you to do it or not. Other things get more complicated. 
So sometimes two layers still isn't enough to connect everything to everything else, especially when everything is really densely packed. This is a picture of Raspberry Pi. It's got, I forget how many layers it's got, I think it's either six or eight layers. So you've got one layer on the top, one layer on the bottom and then lots of other layers of copper all the way in between. And obviously you can see the layers on the top, you can see the layers on the bottom but you can't see anything in between which means that really reverse engineering it is out. So unfortunately this is what's in the BangalJS. It's a four layer board. So you can tell this because, oh, unfortunately my camera's in the middle of it, but you can see that there are some vias here and they don't connect to anything at all. And so that means that they're connecting to something inside. What you can do sometimes though is you can, so you can follow a via round. So here I've drawn on all the pin names and perhaps we're after the SWD pins which are the ones that we use to program it. So you can see they come out of here, they go down to some vias and then they pop out the other side and the wires come up here to these pads and that's how we can tell how they are and what they work. This is kind of okay. All the chips come with data sheets like this apart from again one or two Chinese chips that aren't documented in a way that we can actually get the data, but most chip manufacturers do just, they give you all of the data you need to use their chips because it sells some more chips. So you can look at all this stuff and then you can figure out based on the PCB and these data sheets kind of where things are connected. That only really gets you so far though. So the other thing we can do is because we've got a JavaScript interpreter which we're trying to get working on this watch, we can just compile the version of the JavaScript interpreter that doesn't access any external hardware. All it does is you flash it on there and it appears as the Bluetooth Low Energy device. You connect to the Bluetooth Low Energy device and you can issue commands. You can say turn pin one on, turn it off, turn pin two on, turn it off and you can work your way along until something happens. Maybe the vibration motor goes or maybe the heart rate monitor lights up which is what's happened here. And then you can just keep a note of that and you can start using it in your code. You can do similar things like reading the state of all of the pins and telling when a pin is pressed. So you press a pin, do it again and then you should see one of the pins changing state and you can mark that down as the pin to the button. Some things are a bit more complicated though. Like maybe there's a touch controller or the displays which often need information sent to them when they start up. So things like telling them about what contrast they should be, how the display controller is wired onto the display. And these things are very, very difficult to get hold of normally. So what we need to do is we need to see what the original manufacturer was planning to do. So luckily a lot of these watches do come with ephemeral update and ephemeral update is a standard Android application. It's an APK file which is just a zip. So here I've unset the zip to find the files in it and look for hex files. 
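The probing session looks roughly like the snippet below, typed into the Espruino console over Bluetooth LE. digitalWrite, digitalRead and setWatch are standard Espruino calls; the pin numbers are deliberately arbitrary, because the whole point is that you do not know the mapping yet.

    // Work along the pins one by one and watch/feel what happens.
    digitalWrite(D5, 1);   // nothing obvious
    digitalWrite(D5, 0);
    digitalWrite(D8, 1);   // say the vibration motor buzzes -> write it down
    digitalWrite(D8, 0);

    // Find the buttons: watch a candidate pin for edges, then press things.
    setWatch(function (e) {
      console.log("edge on D11, state =", e.state);
    }, D11, { repeat: true, edge: "both", debounce: 25 });

    // And read raw pin states, e.g. to spot a charge-detect line.
    console.log(digitalRead(D23));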
And there are a bunch of hex files and this shows you kind of the level we're at because this file which is sent along with legitimate watches that are sold all around the world still has a program called Blinky. It still has these things called BLE app HRS and things like that which are actually names of example projects in the Nordic Semi-Conductress DK. So they really haven't bothered too much. So in here we actually, we can find the actual firmware that's supposed to write onto the watch. So normally they flash a bit in the mic controller that stops you reading out the program data so you can't see what's on it. But if you've got a firmware update then the firmware is actually completely unencrypted so you can just pull it straight out of the firmware update. At this point you can look inside and you can see a bunch of data. There's obviously something going on here because there's a pattern. And actually if you look at the chip data sheet then you start to see a lot more going on and this is a standard thing for Armour Cortex mic controllers. The first few bytes of program memory are over a jump table basically. So you've got the stack pointer value that it should be at right at the start so that's effectively telling you kind of how much memory you've got available. And then you've got a reset handler which is what the mic controller jumps to when it first starts up. And then you've got a bunch of other handlers for all kinds of different things. So we can know that if we follow these things down we can see at certain areas of that file there are actually going to be certain bits of program code. Following that through you could do with various disassemblers and just go through the file manually but there's a really cool program called GuideRid that just really helps with this. So it's developed by the NSA and I believe it's developed for finding holes in other people's software. But it's really useful for we do. Completely free to download it. And you start it up, you load the program file in at the correct offset and you get something like this. So this is the same vector table that you saw before except now it's discovered there's a function of teacher address, it's named them all, if you double click on the line it'll jump to it. And we can really start digging down now. So we can now start from the software side of things, look at this big blob of code and finding actual bits of program code within this big blob because there'll be graphics and fonts and all kinds of other stuff in there as well. But that still doesn't help us get at the actual hardware. However looking at the chip data sheet again we see things like this. So this is something about the GPIO and GPIO is the general purpose input output. This is like if you have a pin on the chip and you want to choose what voltage that pin is, whether it's 3.3 volt default or whether it's not fault, you basically you just write to this address in memory. So they have a peripheral, maybe more than one peripheral and then different addresses. So this one starts at address 50000. So if you want to write to the GPIO port you can just actually, if you write a 32 bit value to 50000504 then that will set all 32 pins of the mic controller to whatever the 32 bits of that number were. So this is a really easy way to, if something's writing to this in the code then you know there's something going on. So all you have to do is you do a text search. Here we're actually searching for the 500510 which is the input. 
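Those same registers can also be poked straight from the Espruino console, as a sanity check that an address found in the disassembly really does what you think. peek32 and poke32 are built-in Espruino functions; the register offsets (OUTSET 0x508, OUTCLR 0x50C, IN 0x510, DIRSET 0x518 from base 0x50000000) come from the nRF52832 datasheet, and the pin number below is only an example.

    // Careful: only drive pins you have already identified - making an
    // unknown pin an output can fight other hardware on the board.
    poke32(0x50000518, 1 << 13);   // DIRSET: make P0.13 an output
    poke32(0x50000508, 1 << 13);   // OUTSET: drive P0.13 high
    poke32(0x5000050C, 1 << 13);   // OUTCLR: drive P0.13 low

    // IN: the level of all 32 pins of port 0 as one 32-bit value.
    print(peek32(0x50000510).toString(2));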
We can see it's reading from that pin and we can see that the code is reading from the pin shifting right by a certain number and then adding by one. So we're pretty sure that this function is actually a read pin function and so I've named it GPIO read. Then you can look at all the code that calls this function and you can start to see what's looking at the state of the pin. There are all kinds of different addresses of peripherals for ways of communication like I2C, SPI, serial, analog. So for instance, if you're looking at analog on a watch it's probably trying to read the battery voltage. Then you can look at the code that's out, you can find out where it sets the pin and then you can find out what pin the battery voltage is on. For serial you know that the GPS chip is almost certainly serial and that's the only serial device on the chip so you can look in and you can see, oh, the GPS is set to this. And while the chip has readout protection, nothing stops you from completely re-flashing it, erasing everything, erasing that readout protection, then writing their code back onto it and then running a debugger and stepping through line by line and checking the values of all of these peripherals in the chip and what's going on. So what other stuff do we have in that file that will tell us what's going on? Well, we've actually got text strings usually as well. So for instance here we've got one that says buy. So you know that if you use the watch you'll see it says buy just before it shuts off. So if you see this and you see what code references it then you know that basically after it's drawn that it's going to start shutting down everything from the watch and you can see what they do. In the same way you can see a low battery right at the top. So you know that if there's code that does that, somewhere before there's probably code that says if something called low battery and then you know that that function is the one that checks whether the battery is too low. So this brings me to like a really interesting thing about compilers and in JavaScript you don't generally see this because whatever it turns into is completely hidden for you but in CS it's really good and I don't think many people do look at the disassembly of their code but they kind of should. So this is a really good example, the BangalJS hardware itself is designed quite nicely so that they've said in order to get this LCD to run really quickly we're going to wire it in parallel so you're going to have 8 data bits going straight out of the chip into the LCD. So that's like a quarter of all the IO on the chip is dedicated to the LCD in fact more than that but it's carefully arranged so that if you write a byte to a certain address it will send all that data at once and then you just have to toggle the clock line and they thought they were being really careful with this and they wrote the code to do it and they thought that when they sent all this data which they did by sending directly to the address you can see in here it's actually in fact in the disassembly the compilation of it you can see they're writing directly to the address and then they try and set the pin and reset it but those calls are not in line they have ended up calling a function which is also a little bit inefficient so if they just managed to get these calls in line by changing one compiler flag the LCD would have run roughly twice as fast in their watch software which would have made it look a lot faster than they were actually managing to get out of it. 
So yeah it just shows it's a really good idea to look at what your compiler is doing and not actually take it for granted. So at the end of the day we end up with something like this. This is effectively showing what hardware is on which pin and how it works. Once you can kind of write this in code in the firmware you can actually start to target the device and make it do really interesting things. So Espino and I suppose there's also devices as well there's a configuration file per device so you're literally just writing it into a Python file what all these things are what the name of the controller is what pin it's attached to and hopefully all being well if you get it's got the code in there it'll compile it in and you know that was something that works. So now we've looked at kind of the hardware and how we might get something on it how do you actually run JavaScript on that hardware. So there's a bit of a problem because if you look at V8 they've tried to make a light mode in Chrome and you can see here they've done a good job getting down the memory usage but it's still using about 6 megabytes even in light mode and we have 64k of RAM in total that's not heap that's heap and stack so that's 100 times less than one instance of Chrome is using. So there are other options for engines that can run JavaScript v7 duct tape jerry script I think multiple have one as well but basically these things if you look at the description of them they all say will run down to 64k of memory and that's like the absolute minimum that they will run down to. So realistically that's a kind of hello world thing it's not like a proper usable JavaScript experience that could then also have enough free memory to run a whole watchOS. And 8 years ago when I was looking into trying to do Esprino the situation was even worse so just some of these weren't available if they were they weren't properly built out so the only solution was really to try and make something. So I started work on Esprino and it was actually based on a scripting engine for a music visuals program that I'd written. So yeah it's all open source MPLv2 it can run in 128k of flash in under 6kb of RAM and that's a vaguely usable amount of memory. Currently the BBC Microbit version 1 only has 16k of RAM available 10k is used by the Bluetooth stack and you can still run Esprino on there you can still do things with Bluetooth. So it's not full ES5 but it's most of the stuff that you would be interested in using plus a few ES6 features as well. So if you look at esprino.com forward slash features you get a list of what's there and what's not. So to give you an idea of why Esprino is made the way it is it helps to kind of look at the difference between a PC and a micontroller. So the end result is that you've got maybe like a hundred times more CPU power that's being generous it's actually a lot more than that because the modern computers do a lot more instructions per clock but it's around that. But you've got basically almost a million times more storage on the device. So if you look at the amount of bytes you have per megahertz it's drastically different it's like a thousand times different on a PC and a micontroller. So it means like on a PC you probably really do care about making your code run very very quickly to get the best out of the system overall and you can do that at the expense of using memory for caching and all kinds of other stuff. 
But on a micontroller you have so little memory compared to your processing power that you can afford to burn more cycles making the most of your memory in order to kind of make make good overall use of the system. So one of the things to worry about is memory fragmentation. So in JavaScript you're actually allocating and deallocating stuff all over the place as you define variables and numbers and do maths and things. So it's quite important that your memory doesn't get fragmented. So memory fragmentation is when you might allocate a bunch of memory and then free every other item just as a rough example. And so here you'd end up with a case where you can see after you've deallocated stuff you've got 15 memory elements free but actually you can only allocate a maximum of three continuous elements and it's a big problem in embedded. So what we decided to do is actually use fixed size variables. Recent versions of Esprina are actually squeezing this down much more but there are a few good benefits to this. So one of them is that because everything's arranged in a fixed location, a fixed spacing you can give each one a number. Instead of storing an address you can store an index in that array. And that means that if you've got less than 2,000 variables you can use a 12 bit pointer and you're not using 32 bits like you would normally do in the system. And then you can start to pack things down even further. So it makes garbage collection really fast because you can just bash through it and of course you can still join these blocks together in order to make a big flat memory area if you need it. Just by default you're using fixed size blocks. The other thing to note is that actually even if you think you can allocate a bunch of different size blocks, using Malik has an overhead. If you allocate 4 bytes with Malik, Malik's actually allocating probably 12 at least because it has to have a memory size pointer as well. So yeah, you're actually, even though it's not making full use of everything all the time it's still doing a pretty good job compared to something like Malik. So other thing is code size. I just started with a simple example of a code that made a Mandelbrot factor. I thought, okay, let's have a look and see how much space this actually takes up. So the normal JavaScript code for this was 300 bytes. If you minify it goes down to 167 and if you tokenize that minified code, so that means if you find keywords like for function, try, catch, all that stuff, then you basically just pack them into a byte. So what we say with Espino is that you can have ASCII variable names. Basically you're using numbers 0 to 127 of the character code for your JavaScript. And then it can store in the remaining 128 elements above that it can store tokenized data. So anyway, if you tokenize it, you can get down even lower. But then if you look at what happens if you compile it to bytecode, SpiderMonkey is 270 bytes. Even if you compile it to ARM thumb, which is supposed to be a compact version of ARM assembly and you compile it with GCC offline with the optimized size flag, it's still 290 bytes. So you can see that the minified JavaScript code is actually more compact by quite a long way. Okay, so what does it actually look like? So here's an example of some JavaScript code actually running on a very, very restricted memory area with Espino. So it's just 700 bytes. So I'll just bring up the actual version of this running. 
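For context, the Mandelbrot comparison above is based on a snippet of roughly this shape; this is a reconstruction in the same spirit, not the exact code that was measured.

    function mandelbrot() {
      for (var y = 0; y < 32; y++) {
        var line = "";
        for (var x = 0; x < 32; x++) {
          var Xr = 0, Xi = 0, i = 0;
          var Cr = 4 * x / 32 - 2;
          var Ci = 4 * y / 32 - 2;
          // Iterate z -> z^2 + c until it escapes or we give up.
          while (i < 8 && Xr * Xr + Xi * Xi < 4) {
            var t = Xr * Xr - Xi * Xi + Cr;
            Xi = 2 * Xr * Xi + Ci;
            Xr = t;
            i++;
          }
          line += " *"[i & 1];
        }
        print(line);   // Espruino's print(); console.log works elsewhere
      }
    }
    mandelbrot();

Keywords like function, for, while and var are exactly what the tokenizer packs into single bytes, which is where the extra savings over plain minification come from.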
Also you can see as I type on here that the, you can actually, oh this is a massacre here, you can see that as I'm typing more characters, you can see that the memory is filling up with bytes. And if I hit enter, we'll see that all executing using a little bit of memory temporarily and then putting it away again. So we can in fact, if we try and define an array for instance, you'll see that we're doing string by executing, we'll see that suddenly it's allocating a bunch of memory and you can actually see the indices here in binary. Looks kind of familiar up here being used. You can see how, you can even execute regular expressions in such a small amount of memory. It doesn't really need very much apart from a small amount of execution stack while it's doing it. So yeah, it's, you know, this is 700 bytes. We've actually got 65,000 available on the Bankel JS, which means you can do an awful lot more. Okay, so. So when it's on the watch, this watch has charging connectors and these things on the side are just magnets to hold the charging connector on. You have basically two contacts on the back so you can't physically connect to the watch. In order to try software to it, you need to use Bluetooth Glare Energy, which is the only radio that's built into the watch and actually there's something called Web Bluetooth, which is really, really nice. It allows you to access Bluetooth from a web browser without having to have any drivers or anything and it's supported on a huge amount of devices. It's not built into iOS because unfortunately Apple haven't implemented it yet, but there is an app called WebBLE which provides a web browser instance that you can actually get Web Bluetooth in. So yeah, what does this look like? Yeah, it actually looks like a standard website. This originally started off as a Chrome app, so it kind of installed inside Chrome. And then there's the APIs that develop. We've just been able to turn it into a standard website. Again, just using JavaScript, HTML, CSS. You can develop all your code in here. You send it to the watch. You can access the file system, do all kinds of stuff. But you can also use something like this, which is the App Store. You can just take this, run it on your phone, whatever, and you can upload applications directly to your watch without having to have a programming knowledge. Now, this whole thing again is just written as a static website using JavaScript. So the really nice thing about this is that this is served off of the Bankel.js website at the moment, but the development version is served off of GitHub pages. So if you want to write your own app, it's trivial to just basically fork the Bankel app to repository, add your app, maybe delete apps that you don't want from it, and you have your own personal app loader served off GitHub pages. You haven't had to rent a server. You haven't had any setup. You've literally just forked and enabled GitHub pages. And so regardless of what happens to Esprino or any choices that are made, this will continue to exist and still be usable regardless. So to give you a bit of an example of what this actually looks like, this is some code to create a speedometer using Bankel.js. So most of this is actually just handling the graphics. You are turning the GPS on up here, and then you're just handling the GPS here. You get the speed, and you choose how it's formatted, and then you draw it with the graphics library. And this is something that you could deploy to the app store. In fact, I think it is in the app store. 
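This is not the exact app-loader code, but a minimal speedometer in the same spirit uses only documented Bangle.js calls: Bangle.setGPSPower, the GPS event, and the global Graphics object g.

    Bangle.setGPSPower(1);               // turn the GPS on

    Bangle.on('GPS', function (fix) {    // called as position updates arrive
      g.clear();
      g.setFontVector(40);
      g.setFontAlign(0, 0);              // centre text on the given point
      var text = fix.fix ? fix.speed.toFixed(1) + " km/h" : "no fix";
      g.drawString(text, g.getWidth() / 2, g.getHeight() / 2);
    });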
And you have a speedo that you can just split to. So if you have a Bankel, that's fine. You can do it. If you don't, you can actually just run an emulator. This emulator works because Bankel.js and Esprino is written for a reasonably low performance mic controller. We can just compile it with M-scripten. So if I bring this up here, you can see this is the IDE. So this is the actual Bankel emulator window. If I run this code, it should just draw a line across there. But we can interact with the rep as we go. So if I, I suppose we're just going to do a counter. And I'll do it in the rep, or rather than doing it in the editor. So if I define a counter here, or the second, we'll just increment the counter. And then we'll clear the screen. We'll set a vector font, because it's a sort of scalable vector font. We can make it nice and big if we do that. And we can set the alignment up so that it's kind of in the middle. And then finally, we'll draw string, the counter, and in the middle of the screen. And that should be all we need. So if I set onSec, it'll display the number. And then I can say second tool. And we've got a nice counter that's counting up. And the nice thing is that, like, and this will be no surprise to anyone else who's done a web development, but this is all happening in the background. It's only using a single thread, but each task is nice and small. So I can still interact with it, and if I don't like this, I can maybe change the font size and I can re-upload that function. And then the next time it calls, it'll all be updated. So it's just a really nice way to work with hardware. The alternatives would have been really painful. This isn't that you can just pick up and use. And this code here, you can dump the current state of the interpreter. And now this should be something that we can actually, we can upload directly to a Bangal GS watch and have the actual hardware working straight away. So the Esprino software, it runs on a whole bunch of different devices. These are some of the different bits of hardware that we've got that actually runs on. There'll be more coming. Because it's open source, it's available through a whole variety of different things, like ESPHG66, ESP32. These are development boards that are only, they cost like five euros delivered or less. So if you really want to have a play with embedded software and you haven't yet, it's a really, really good idea. Because, yeah, five, you know, it's the cost of a beer and you can have a play around. And hopefully you can have fun turning some lights on and off and doing some cool things in the real world. Right, thanks for watching. Hopefully some questions coming up soon. And yeah, hopefully chat to you a bit after. Thanks.
How I reverse engineered an off the shelf smart watch in order to create Bangle.js, a watch that runs JavaScript. I'll cover the process as well as some of the hacks Espruino employs to run JS on a device with only 64k of RAM!
10.5446/53573 (DOI)
Welcome. This is my first time I'm talking at FOSDEM. You can follow along the slides via the link. By the time this airs, I hope to be able to provide subtitles for download. The content I'm going to speak about, and the notes, you can find either on the right hand side or at the bottom, depending on your screen. My name is Andre Yinesch. I'm currently working as a software engineer at Jambird. It's a medium-sized software provider from Germany. This talk has nothing to do with my work. Since this reveal.js presentation is pre-recorded using OBS Studio, please ask your questions at the end. In between, you will find scenes recorded with a screen cam on Android. I would welcome it if you would allow me to add your comments to the presentation slides afterwards. Please tell me if you want to opt in. Unless otherwise noted, the content is made available under the Creative Commons Attribution 4.0 International License. Let's start with responsive web design. I'm quite sure you ran into this picture in some form before. It's a typical marketing image to depict a web app that's usable on different devices by adjusting the layout accordingly. But what if I told you this approach falls short? How would you classify this Samsung Galaxy Note 3 with a pen? Is it a mobile? It's small enough to be counted as one. But you have a pointing device, the pen. So you could design for it like for a desktop. You would waste screen real estate otherwise, after all. While I was preparing for a game jam in early 2020, I ran across an article on the Mozilla Developer Network regarding sensor APIs. I was aware of the ability to recognize motion and orientation from my days with Firefox OS. But by now there are more sensor APIs available. This talk will show you some tech demos. They are not ready for production usage yet in general, since normally you need to toggle a flag in the browser to get access to the underlying API. But that shouldn't stop you from showing off what's possible on the web. My first demo is called What's the User Doing? This is a showcase I ran into years ago. It highlights a bit the privacy implications this data can have. You can see, I can tell whether the user is sitting, walking, taking a picture. It was even able to tell when I laid the phone down on the desk. This is normally not data you want to hand out without a second thought. After my game jam, I decided to create a small demo site which runs some primitive checks on your device capabilities. Note that some of this is done with CSS and some with JavaScript. So you have media queries, for example for hover, for touch and pointer devices, aspect ratios, colors, orientation. I'm not sure what colors is meant to do, but I can think of use cases, for example optimizing for e-book readers. Since I created this page, I started to design tooltips based on the hover media query instead of guessing from the screen size. Orientation could be used if you have certain criteria. For example, you have a sidebar which is needed and the thing you want to show next to it. Device motion is the movement of a device in the room. Orientation is how you hold it. The accelerometer measures acceleration. The ambient light sensor measures the relative light level around the device. I have a small demo for it. Web Bluetooth is available in Chrome, but also only behind a flag. I got something prepared for that for you. Geolocation should be known from, for example, map apps. The gyroscope is a compass, so to speak. We saw it in the user activity demo just now.
There's also the magnetometer which measures the magnet field and near field communication which you could use for near field communication. For example if you have something close to it and something not shown here is voice recognition API. I have something for that as well. While programming the slides, I discovered this web site called whatcanthewebdo.today. I cramped the earlier checks on the screenshot but normally it's a bit better laid out. You can see that my laptop has limited capabilities but I can do something. My next demo I promised is about ambient light sensor. As you will be able to see it's recorded with Chrome and I have to flip it flag. I'm also showing you what's happening. The flag is not activated. Once you switch a flag you need to restart Chrome and here the page reloads and you can see from the black square it's turning white. When I hold it close to the lamp I have here and it turns black otherwise. If I deactivate the flag back to the default, Chrome closes, reopens, I reopen the page, scroll down again and the IntendD is unknown. I can imagine that you want to adjust your color palette when you have access to this data. For example in a broad sunlight you need to have more contrast or if you are in a basement or in a dark environment like me right now you could turn on some other colors. This could improve the reading experience. My next demo is about Bluetooth. Did you know you can use Bluetooth with a web browser? It's a debilit topic but that shouldn't stop you from a true tech demo. Sadly I didn't have the right equipment like used for the demos I found online so I show you a scan with Bluetooth for peaking around me using low energy. That is the output of the speaker up. If you don't activate the flag you will throw an error. You also get a negative experiment if you don't have Bluetooth activated. So if I switch on the flag and activate Bluetooth and reload the page I can see I need to couple my device and now I can see some output like the device name, the transmission power and so on. I found another demo which also we need to couple first and then you scan and below okay cut off here below you would see different devices and their signal strength and so on. I will add the links to those demos to the presentation talk site. My last demo for now is the voice recognition. I am going to use Chrome like mostly we find recommended but know that you can use Firefox also. In the Mozilla rigi I linked here there are two flags you need to switch in the about config and then you can use voice recognition of Firefox as well. However Chrome used to be a bit more reliable in my test. For that I need to make it available. I am using Nyan which is a really old library from the early days of the web recognition API. It is a small JavaScript library which reads about two Tito bytes but it supports multiple languages and has no other dependencies. You can try it for example by saying hello. Okay let's reload the page. And other pages I saw for example are on code pan but okay it's loaded hello. Checking microphone is activated. Show me huge kittens. Show me arch natural park. Okay that's the demo effect. I will cut another video in between. Let's jump back to Firefox. Moving on. One thing I want to tell you about is the project Fugu. It's an initiative at the Chromium project to push the work forward by providing more access to more than the data. For one to catch up with the native API apps but also to enable new scenarios you cannot even think of. 
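Going back to the ambient light demo for a moment, a minimal sketch of reading the sensor might look like this, assuming the Generic Sensor API shape and the Chrome flag shown earlier:

```javascript
// Hedged sketch of an ambient light reading via the Generic Sensor API
// (behind a Chrome flag, as in the demo above).
if ('AmbientLightSensor' in window) {
  const sensor = new AmbientLightSensor();
  sensor.addEventListener('reading', () => {
    // React to the measured illuminance, e.g. switch to a high-contrast
    // palette in bright light (the threshold here is illustrative).
    document.body.classList.toggle('high-contrast', sensor.illuminance > 1000);
  });
  sensor.addEventListener('error', (e) => console.log('Sensor error:', e.error.name));
  sensor.start();
} else {
  console.log('Ambient light intensity is unknown'); // flag off or unsupported
}
```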
Something to say there's a pushback from Apple or Mozilla for example. If you're interested read this blog from actual raffle called Clatsom HL's agency theory. Within it you will find for example different sensor data which will be made available within the project Fugu like cellular, Wi-Fi, Bluetooth and FT radios or other sensors like camera, photo or video, microphones, proximity, temperature, accelerometer, ambient light, barometer, location that means GPS, a compass, so a magnetometer, orientation, digital strobe, touch digital data, the image that means fingerprinting or infrared mapping but also GPO accelerated, graphics, computer and media encode, decode or vector instructions. There's also a status page for the project Fugu. With that I'm showing which images are used from where and I'm through. Thank you for listening. Do you have questions?
For years now, we associate Responsive Web Design with Media Queries which adapt to the width of the device we are using. But what if we can take this one step further? Modern devices are brimful of sensors. The fun thing? There are JavaScript and CSS APIs which allow access to some of them! This talk will introduce you to some lesser known Web APIs and give examples on how you can progressively enhance your design with sensor input! When I did some research for hackathons, I noticed on MDN Web Docs, that there are APIs for device sensors, that rarely get used. The devicemotion and deviceorientation API might be known from Firefox OS days. But you can also use Ambient Light, Voice, Colours, Pointer devices, NFC or Bluetooth - if the device supports it. This talk is using a Fairphone 3+ as mobile device to demonstrate some of those APIs. Part of them are available on Firefox, others on Chrome. You might need to toggle some flags to enable them. This means, it's not production ready yet - but you can build interesting demos to convince browser vendors to refine the APIs for production! An idea might be to adjust the contrast or font to the environment you are. Or the position you are in. Or the devices which could be detected around you.
10.5446/53574 (DOI)
Good morning everyone. Welcome to FOSDEM. My name is Alon Mironic and I work for Synopsys, where I manage R&D for the Seeker agents. Seeker, in case you haven't heard about it or rather haven't heard about it yet, is probably the best IS tool out there today. But as fascinating as IS may be, it's just not my topic today, so this is the last time I'm going to mention Seeker. Instead, today, I want to talk about DOS or Denial of Service attacks against Node.js applications. Before we start, a quick shout out to the inspiration for this talk. A couple of years back, I attended my first security conference, Global AppSec Tel Aviv, and I got to listen to the keynote talk by Asta Signal. In that talk, she basically said that if we, as a security industry, want to educate developers about security, it is not enough to have security conferences. We as security professionals need to put ourselves out there. We need to go out and meet developers in their natural habitat and talk about security topics in developer conferences, not just security conferences. So in my little corner of the world, this is what I am going to try and do today. Before we jump into things, a couple of technical refreshers. First of all, the Node.js event loop. This is a close simplification, but for the purpose of this talk, it will probably be good enough. Node.js essentially has a single thread, the event loop, which governs the flow of the program. By default, all the user code, or at least all the JavaScript user code, is executed on this event loop. Unless it is delegated to do something asynchronously on a worker thread. In my team, we call Node.js a platform for grownups. Because if you understand this architecture, if you cater to it, if you write your application in a way that takes advantage of this architecture and write applications that fit this architecture, for example, web applications, which are characterized by short bursts of CPU intensive work between long bursts, long IOTASKS, which can be delegated to worker threads, Node.js can be ridiculously fast. If you insist on writing your code in a way that does not cater to this architecture, Node.js can be painfully slow. The second topic I want to touch on is denial of service or DOS attacks. Now, a DOS attack is any form of attack where an attacker sends a request and causes it to consume large amounts of system resources, be it CPU, memory, IOTA, whatever. And since this illegitimate request is consuming all of these resources, the program cannot serve legitimate requests. In Node.js, this can be especially painful because if an attacker is able to tie up the event loop, the application will be unable to serve any other request as long as this current chunk of work is being executed on the event loop. So that was a lot of high level concepts. Some of them may be new to some of you, some of them may not. Let's see a few concrete examples. So the probably most famous form of DOS attack in Node.js specifically and in JavaScript in general is denial of service by RegEx or Redos. Like many other languages, JavaScript, Node.js has a RegEx class and what may not be immediately obvious is that this RegEx is evaluated on the event loop. So I have a really simple application here. It receives two query parameters, RegEx, RegEx, sorry, in text and evaluates the text against the regular expression. Seems innocuous, right? Wrong. 
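A hedged sketch of such an endpoint, not the speaker's exact code, might look like this:

```javascript
// Sketch of the vulnerable pattern described above: user-controlled input is
// compiled and evaluated as a RegExp, synchronously, on the event loop.
const express = require('express');
const app = express();

app.get('/match', (req, res) => {
  const { regex, text } = req.query;
  const result = new RegExp(regex).test(text); // blocks the event loop while it runs
  res.json({ match: result });
});

app.listen(3000);
```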
RegEx's are ridiculously powerful and although you may intuitively think that the complexity or time complexity of evaluating a RegEx is linearly dependent on the length of the text, it is not. RegEx's or PCRE's at least, Perl compliant RegEx's like the one we have in JavaScript, support backtracking, which is essentially the ability of a wild card or meta character to refer to a previous group that may or may not contain a meta character. That's a lot of talk. Let's see an example. So I put together here an intentionally stupid RegEx. This RegEx, if you read it, means that the entire string from start to end contains one or more groups of one or more A characters. And then I constructed a string which would be potentially difficult to test this RegEx against. This is a series of A's followed by a B. Now, if you try to imagine a string of 10 characters, just try and think how many permutations a RegEx engine needs to check. 10 characters can be a string of 10 A's or 10 strings of 1A or two strings of 5 A's or a string of 1A, another string of 1A, and a string of 8 A's and so on and so forth. Now, it's been a couple of years since college and it's way too early in the morning for me to be doing math, but you can kind of intuitively see that this can get really complicated really fast. So I indeed ran this benchmark, graphed it out, and you can see it gets really bad really quickly. It's all kind of okay until about 29, 30 characters. At 32 characters, this RegEx takes almost a minute to evaluate on my machine. At 35 characters, it takes almost five minutes to evaluate on my machine. And to be honest, I was trying to listen to music on YouTube while I was working. And after 35 characters, I just killed this benchmark because it was taking too much CPU and making the lag in the music unbearable. So this is definitely a real problem. What can we do about it? Well, frankly, quite a lot. First of all, and kind of obviously, if you can, do not allow tainted inputs, i.e. user-controlled input to be evaluated as your RegEx. This isn't always possible. For instance, if you have a search functionality where a user can input wild cards, they usually get translated to RegExes. So this isn't always possible. But if you can avoid it, do. If you can't, sanitize or check these RegExes and make sure that they're safe. There are a couple of packages that can do this. For instance, safe RegEx, but a quick search will help you find something that fits your needs. Second, kind of in the same line of thought, if you can, don't allow tainted input, user-controlled input to be evaluated by your RegEx. This is usually not possible because we don't usually use RegExes to evaluate our own data. We know what our own data is. We use it to evaluate data we do not know. But if you can, avoid it. If you can't, at the very least, use length limits. Length limits are really easy to check, they're really cheap to check. And if you can apply a length limit before sending your string, before sending the string to a RegEx, you can eliminate a lot of that access because as we've seen, evaluating RegExes gets worse as the string grows. And of course, the best way to avoid tasks by RegEx is to not use RegExes at all. If you can, take a look at the RE2 package. This is not a PCI-RE compliant RegEx engine. It is considerably less powerful than the built-in RegEx. But in exchange to this loss of power, it does assure that the evaluation of a RegEx is linear to the size of the string being evaluated. 
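As a sketch, combining a cheap length limit with the re2 package might look like this; the limit and names are illustrative, and the RegExp-style interface of the re2 package is assumed:

```javascript
// Hedged sketch of two mitigations discussed above: reject oversized input
// early, and use re2, which trades some PCRE features for linear-time matching.
const RE2 = require('re2');

const MAX_TEXT_LENGTH = 256; // pick a limit that fits your legitimate inputs

function guardedMatch(pattern, text) {
  if (typeof text !== 'string' || text.length > MAX_TEXT_LENGTH) {
    return false;                      // cheap check before any matching happens
  }
  return new RE2(pattern).test(text);  // matching time is linear in the text length
}
```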
So a lot of the denial of service behavior just disappears here. If that doesn't apply, take a look at packages like validated.js. For a lot of your needs, such as validating emails, URLs, usernames, passwords, whatever, there are built-in packages or available packages, available functionality that can already do this validation, either by using RegExes, which have been tested as safe or not using RegExes at all. So unless you absolutely need to reinvent the wheel, don't reinvent it. Just use a wheel someone else already invented for you. So those were RegExes. Let's look at another type of DOS attack in Node.js. And I am, of course, speaking about JSON. So once again, I have a really simple web application here, which takes a JSON input as a post request, passes it, and returns the number of keys in the JSON. Again, what you do not see here or what may not be immediately obvious is that this passing is done on the event loop. So of course, we need to ask ourselves how bad is this really? And again, I have a really simple benchmark. I construct the simplest JSON possible, which is string of arrays surrounded by double quotes, making this a legal JSON of various lengths, of course, passing it, and I graphed out the results. So the good news is that the time it takes to pass a JSON object, sorry, a JSON string into an object is linearly dependent on the size of the JSON. The bad news is that the time it takes to pass a JSON, a string into a JSON object is linearly dependent on the size of the string, and that JSON is everywhere in JavaScript in general and in NodeJ specifically. Unless you're writing an extremely simple application, chances are the way this application communicates with the external world is via JSON, especially if you're using APIs and not just HTML frontend. So while the effect here is still linear, JSON is everywhere. JSONs, especially for APIs, tend to be significantly larger than the 30 characters we've seen in the regex example, and you really will have a hard time avoiding JSON altogether. So what can you do? Of course, if you can, do not pass, tainted, or user-controlled JSON input. As I said, this is usually not realistic. But what you can do is, as always, use size limits. Most web frameworks such as Express or Happy or Fastify have built-in limits for their JSON passes, and if you'll excuse the language, most of these built-in limits make very little sense, at least to me. As far as I recall, Express's built-in limit is, or default limit is 100 kilobytes, Happy and Fastify allow for megabytes, and if your application needs to pass 100 kilobytes or a megabyte of JSON, more power to you. Go for it. But if it doesn't, why allow it? I've seen log-in forms where you take a username and a password, let's say eight characters each, another bit of overhead for the JSON format. 30 characters? Pack them as JSON, send us to the server. If your legitimate input is 30 characters at most, why would you allow an attacker to send 100 kilobytes or megabytes of text, have it passed on the event loop? Don't. Get to know your applications, get to know what a reasonable input is, allow for some overhead for edge cases, and limit it at that. Don't let attackers go wild with the sizes here. Now, if you aren't using some built-in framework, and you are doing the JSON pass in yourself, you could consider using a third party such as BFJ, Big Friendly JSON, or JSONStream, instead of the built-in JSON.pass. Now, and just to stress this, this will not necessarily be faster than JSON.pass. 
Avoiding denial of service isn't just about speed. Avoiding denial of service is avoiding or preventing an attacker from tying up the event loop. Both BFJ and JSONStream delegate this passing to a worker thread and do it asynchronously. So even though the total time to pass may take a bit longer, it will not be done on the event loop and will not block up your application. So, so far we've seen two examples of how an attacker can create a DUS attack based on the payload they're sending. A different kind of DUS attack, which is not dependent on the payload, can be achieved by using synchronous APIs, which stop the event loop and may hang due to some external reasons. More often than not, although not necessarily, but more often than not, this means storage. So I have a very simple application here. As you can see, there is no user input, no payload to send. You just access the URL and the application will read a text file and return the contents of it, which is Lorem, Ipsum, etc. etc. Now, what you may notice here is that this reading is done synchronously. We call readFileSync. Now, this may be very fast or we don't know what's going on in the storage. It may be, this storage may be served by a misconfigured NFS server or something, and this may hang for two seconds or a minute or eight hours. Yes, true story. I have seen storage servers hang for so long. And especially in today's world, where we don't always control the entire infrastructure because we are moving our applications to the cloud, we should never assume anything about storage network, any hardware capabilities, and we should always default to protect against this. What does this mean in our world? Well, generally speaking, when we handle storage in NERJS, there are two ways of doing this, two types of APIs. First of all, there is the asynchronous way where the request gets delegated to the operating system, a work-affirmed monitor set, and when the operation is done, we get a call back to the event loop with the appropriate data, read, return, whatever, with the return value from the operating system. We see this both in built-in modules like in the FS model, we have read, deal, and write file. We see this in third parties, which do storage operations or heavy computational operations like ProteaFS or FSXTRA or ADMZIP, etc. So we usually have APIs to do this. Now, if one way to access storage is to use asynchronous APIs, well, naturally, the other way would be the wrong way. As I said, we do not know when anything may or may not hang. If there is an asynchronous API, we should always default to use it. Or at the very least, if you are using the synchronous flavor, have a really good reason why you are doing this or really good explanation why this is completely internal and cannot be invoked by an outside attacker. Now, I think I'm getting close to my time running out, so let's summarize. First of all, any code that relies on tainted input, user-generated input, should have some sort of sanitation, validation, protection, whatever. And this is a general good rule of thumb, not just for dust attacks, but for a whole range of security considerations. Second, a lot of functions have synchronous and asynchronous variants. Usually, this relates to IO storage, but not necessarily. We have similar pairs of APIs in cryptographic APIs and child process and a bunch of APIs. Whenever you have an asynchronous API, you should probably be using it. And finally, as developers, we are really good at testing. 
We have become really good in writing CI pipelines. But unfortunately, a lot of developers treat these CI pipelines as a way to validate functionality. And that's just a part of our job. If we are also responsible to validate the security of our applications, it means we also need to test it as part of our CI pipelines and not wait for pen testing once a year or once a release or whatever. Now, I'm not going to promote any specific tools. This is not a sales pitch. Suffice to say, there are a bunch of excellent security tools, both open source and proprietary. The good ones integrate really easily into your CI pipelines. Do your research, find the tools that fits your use case and incorporate it into your CI. With that, I'll leave you with a couple of links. If you're not familiar with Node.js's guide about using the event loop, I do recommend that you read it. All of the demos and the benchmarks I've shared here are available on my GitHub. Feel free to clone it, play around with it, and of course, patch is a welcome. And with that, I will open the floor up to questions once this recording has ended. I do encourage you, if this is interesting to you, if you want to continue discussing, reach out to me on my email and Twitter and LinkedIn. I'm not an easy person to find, sorry. Bound means be my guest. And thank you for your time.
Node.js’ single-threaded nature makes it very susceptible to DOS attacks. While Node.js’ event loop allows performing some operations in an asynchronous fashion, it’s still quite easy to write a vulnerable Node.js application by making a few simple mistakes. In this talk I’ll cover some common ways a Node.js application may be vulnerable to DoS attacks and some common best-practices and counter measures to defend against such attacks.
10.5446/53578 (DOI)
So I want to start with a bit of history, going back to around 1899 and the experiments of Edward Thorndike. Thorndike put a cat inside a puzzle box, a cage that the cat could open from the inside by pressing a panel, and after every escape he put the cat back in the cage. What Thorndike noticed is that the time it takes for the cat to release itself gets shorter and shorter in time, until eventually it seemed like the cat was just understanding the connection between the panel and the cage and going straight for it. But Thorndike's conclusion was that the cat does not act intelligently and does not understand the real connection between the panel and the cage; it simply improves by trial and error, repeating the actions that happened to lead to a good outcome. Many years later came B. F. Skinner, which was probably one of the most famous psychologists in history. And he came up with another learning model which was called the reinforcement learning. So reinforcement learning is based on the trial and error learning, where behaviours that are rewarded tend to be repeated. You can see it, for example, in how children learn to speak: by producing random sounds and only repeating those words that are reinforced by their parents. Let me just show you a very cool video made by B. F. Skinner. Let's see that. The two pigeons are at either end of a small ping-pong table. One pigeon pecks the ball as it comes toward him and knocks it toward the other pigeon. The other pigeon pecks the ball and knocks it back across the table. So why are we seeing all of this? Because this is the basis of what we want to do when we bring reinforcement learning, which is a concept from psychology, into machine learning. So the learner performs actions upon the environment, and in response, the environment returns this feedback to the learner. So the learner can perceive the reward that he gets by doing that action, and the new state that he moves to in the environment by doing that action. So the agent and the environment constitute this continuous feedback loop between them. Now, in order to bring reinforcement learning to any computable environment, we need to find this mathematical definition for it. So the Markov decision process, or in short, MDP, gives us that formal definition: it describes the states of the environment, the actions that are available in each state, and the rewards.
Now, in most interesting cases, we are in a situation wherein the agent, or the learner, does not understand or cannot compute or predict the environment in which it acts. So we call it model-free reinforcement learning, because we do not have a model of the environment. Therefore, in those cases, the only thing that we can do is to sample the environment by interacting with it. Imagine for a moment that we could lay out the whole environment as a graph of states and actions. Then for any state, any action in that graph, we would be able to compute this value, and that value represents the maximum total reward achievable by performing this action in this state. This is what we call the Q-value. And that's what Q-learning does: it approximates those Q-values, because we don't have any knowledge about the environment yet. Now, when the agent reaches a state, it has to do two things. First, we need to predict the Q-values of all actions that are available from this state, and then just simply pick the action that has the highest Q-value. And we will talk about this predict thing in a moment. Besides acting, the agent also has memory: it recalls or remembers that it was in state S, it did an action A, it got a reward R, and it got to the new state S prime. So it's really similar to how an animal acts. And, very roughly, we can say that at the beginning of the algorithm, the predictions of the agent will probably be meaningless, because it has no idea about the environment; only as it accumulates experience do those predictions start to become useful. So what we want to do is to approximate the Q-value function with a neural network: the network receives the state as its input and outputs the Q-values of the actions, and we train it with the examples that the agent has collected. So the agent actually samples the environment and uses it as training data. Now here, this kind of learning is another type of learning, a completely different type of learning, which is called supervised learning, because the neural network learns using examples.
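As a small sketch, the pieces the agent needs at this point, a greedy pick over predicted Q-values and a memory of (S, A, R, S') records, might look like this; predictQ is a placeholder standing in for the network:

```javascript
// Hedged sketch: greedy action choice over predicted Q-values, and the
// memory records described above.
const memory = [];

function act(state, predictQ) {
  const q = predictQ(state);                // one Q-value per available action
  return q.indexOf(Math.max(...q));         // pick the action with the highest Q-value
}

function remember(state, action, reward, nextState) {
  memory.push({ state, action, reward, nextState }); // S, A, R, S'
}
```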
So how do we turn those memories into training examples? Using the formula which we call the Bellman optimality equation, we can compute the Q-value of a certain action in a certain state. This formula says that the Q-value is the sum of the immediate reward that the agent has received upon doing that action and the maximal Q-value from the next state. And because we know the reward and we know the next state, we can compute it directly from the agent's memory. I want to give some intuition about the formula. So if we already know that the agent has done action A, then it must have got this immediate reward for doing that action, and that's the R factor here. In addition, because we know that the agent has moved to a new state, and because we know it follows the optimal policy, then it must have chosen the action that has the highest Q-value, which is exactly this max Q factor. Now, one thing that we need to notice here is that we are using the predictions of the network in order to compute the training data, because the Q-value of the next state is not given to us by the memory of the agent, so we need to use the prediction, the neural network, in order to predict it. Now, there are some problems that can occur due to the fact that we are using the same network to predict values and to train the network using those values. And one of those problems is the moving target issue, and the way to resolve it is to use two neural networks instead of one, but we will not discuss this solution now. Now, another important issue that I want to bring up is that this recursive breakdown that we did, and it's recursive because we use Q to define Q, is only possible here because the MDP problem satisfies what we call the Bellman principle of optimality. So there is a small class of optimization problems that can be decomposed in this recursive way just because they have this unique structure. The simplest problem that I can think of that satisfies this principle is the problem of finding the shortest path between two vertices in a graph, whereas the problem of finding the longest path between two vertices in a graph does not satisfy this principle. So I think it's interesting to understand the relation between finding the shortest path in a graph and finding the optimal policy of an MDP. I can just say that whenever we examine the shortest path between two vertices in a graph, we find out that it is composed of different vertices, and if we choose two vertices from this path, we will find out that the path between those vertices is also the shortest path between those vertices. So whenever we have an optimization problem where the overall solution can be decomposed into subsolutions, and each subsolution is a solution for the corresponding subproblem, then this unique structure is happening, and then we know that the problem satisfies this principle. Now, we have ignored this gamma factor, but it's quite important. This gamma factor goes between 0 and 1, and if we open up this expression, we find out that this gamma factor reduces, or discounts, the importance of future rewards. And for most cases, this is important because we want our agent to consider rewards that are in the nearest future and not rewards that are far in the future, because the likelihood or the probability of ever reaching those far-in-the-future rewards is lower anyway.
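As a small sketch in JavaScript, the training target described here, including the gamma discount, might be computed like this; the helper name and the gamma value are illustrative:

```javascript
// Hedged sketch: the Bellman optimality target for one remembered step.
// qNext is the network's predicted Q-values for the next state.
function bellmanTarget(reward, qNext, isTerminal, gamma = 0.95) {
  if (isTerminal) return reward;               // no future reward after a terminal state
  return reward + gamma * Math.max(...qNext);  // r + gamma * max_a' Q(s', a')
}

// e.g. bellmanTarget(-0.3, [0.1, 0.4, 0.2], false) === -0.3 + 0.95 * 0.4
```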
Another important consideration that we have to do is the progress of the agent in the environment. According to what we've described, the agent moves in the environment in a greedy way, in the sense that it always chooses the action that has the highest Q value. And this approach can have its limitations because the agent would converge to a local maximum, in the sense that there might be a better solution somewhere in the environment that the agent would completely miss out simply because it goes to the nearest best solution. And that problem can be avoided if we add a portion of randomality or, let's say, noise to the progress of the agent. So the absolute greedy policy means that the agent moves greedily in some probability and otherwise would move randomly. So whenever the agent moves greedily, we say that it exploits the knowledge or the experience that the agent has gained so far. Otherwise, it is exploring the environment because when it moves randomly, the purpose of a random movement is to explore new parts, new territories in the environment that were not discovered yet. Now, this approach has another limitations because at first, as we said in the beginning of the algorithm, the predictions that the agent can do for those Q values or the predictions that are supplied by the neural network are meaningless because there is no memory, there is no experience that can derive very good predictions. And this is why the decaying epsilon greedy policy is often more efficient because in that policy, the agent tends to move to progress randomly in the environment at the beginning of the algorithm and gradually, this epsilon decays exponentially, causing our agent to start exploiting the knowledge that he has gained. So this trade-off between exploration and exploitation can be quite tricky and you will find it in many nature-inspired algorithms. And the decaying epsilon greedy can be viewed in a more intuitive way as the process of maturity of the agent because at first, when the agent is young and has no life experience, it would rely more, it will make sense to rely more on randomality, on being playful with the environment in order to learn, very similar to how children learn. But the more that the agent gains life experience, then it better rely on that experience rather than keeping with the exploration. So another important remark about this topic is that our agent is being used in order to find the optimal policy. But in fact, when the agent progresses in the environment, then it follows a slightly different policy if it uses the epsilon greedy and the decaying epsilon greedy. Because those policies are a mixture of acting greedily, which is the policy that we want to find, and acting randomly, because that helps us with finding better solutions. So whenever an agent follows a different policy from the policy that it aims to find, we say that the algorithm is off policy. So Q-learning is an off-policy algorithm. Now, another important remark to make is that the original Q-learning algorithm did not involve any neural network. Instead, we used a simple table of numbers in order to record the Q-values. So each row in the table would be a state, and each column would represent an action. And then the agent would just update this action, this table using the temporal difference approach, and come up with those converging Q-values. The problem with that was that whenever the environment is huge, then you quickly get performance issues. 
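A minimal sketch of that table-based temporal-difference update might look like this; alpha and gamma here are illustrative values:

```javascript
// Hedged sketch of the original, table-based Q-learning update, where Q is a
// plain 2D array indexed by [state][action].
function tabularUpdate(Q, s, a, reward, sNext, alpha = 0.1, gamma = 0.95) {
  const bestNext = Math.max(...Q[sNext]);   // max_a' Q(s', a')
  const tdTarget = reward + gamma * bestNext;
  Q[s][a] += alpha * (tdTarget - Q[s][a]);  // nudge the entry towards the target
}
```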
This kind of implementation is not scalable enough for real-life cases. Therefore, it was necessary to introduce the neural network into Q-learning as a function approximator. So it is more scalable to approximate a function that describes the Q-value instead of using a table that is corrected. So going back to the algorithm, there is another important concept that we need to introduce, which is the concept of episodes. We call that when we send our agent to the environment, we cannot really tell what will happen to the agent in the environment. It might be getting into some infinite loop or being caught up in some dead end. In order to solve such problems, we split the algorithm into episodes. An episode defines how many actions the agent is allowed to make. And once an episode is over, then we just put the agent back in some randomly selected state. That approach is also helpful for the learning quality, because then the agent has the opportunity to explore new territories in the environment in which he might not have reached anyway. Now we can, using those concepts, we can modify our algorithm a bit. So we split the execution into episodes. And whenever an episode starts, our agent first flips a coin in the probability of epsilon. And if, in some cases, it will choose a random action, otherwise it will rely on the experience, meaning that it will ask the neural network to predict the Q-values for all the actions that are allowable in that state. And once it gets the answer, it will choose the action that has the highest Q-value. It will also remember the outcome of performing the selected action. And once an episode is over, we use this accumulated memory in order to come up with training data for the network. To compute this training data, we use the Bellman optimal equation, as we described earlier. So now we have reached our example. This example is called the mountain car problem, and it's a classical problem in machine learning. So we have this agent, which is the track, and it's locked between those two hills. And the goal of the agent is to reach this 0.6 position from the right. Now, the trick here is that the car does not have the sufficient engine power to drive toward this goal in one shot. Instead, it has to use the gravitational force to drive backward, and then using the inertia to continue to the goal. And of course, we're not going to program the agent specifically to do that. We will just rely on reinforcement learning on the deep Q-learning algorithm in order to make our agent understand how to reach the goal. So we start by describing the state space, which is composed of the velocity of the agent and the position of the agent. So our state space contains all the possible combinations of those. And our terminating states are those states that where the velocity of the agent is positive and the position of the agent is above 0.6. Now, the action space is consist on from those three actions that are available from any state, which are driving left, being neutral, or driving right. In order to implement the solution, we are going to use JavaScript and TensorFlow.js library. So the first thing that we want to do is to implement our environment, which is a simple class. The main functionality of the environment is to update the state of the agent upon receiving the action that the agent has done. So we update the velocity of the agent and the position of the agent. Finally, we return the indication of whether the agent has reached the terminating state or not. 
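A sketch of such an environment class, using the classic mountain-car dynamics, might look like this; the constants are the commonly used ones, not necessarily the exact values from the talk:

```javascript
// Hedged sketch of the environment step: update velocity and position and
// report whether a terminating state was reached.
class MountainCarEnv {
  constructor() {
    this.position = -0.5;
    this.velocity = 0;
  }
  // action: 0 = drive left, 1 = neutral, 2 = drive right
  step(action) {
    this.velocity += (action - 1) * 0.001 + Math.cos(3 * this.position) * -0.0025;
    this.velocity = Math.max(-0.07, Math.min(0.07, this.velocity));
    this.position = Math.max(-1.2, Math.min(0.6, this.position + this.velocity));
    // Terminating state: position at the goal with positive velocity.
    return this.position >= 0.6 && this.velocity > 0;
  }
}
```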
Now, all this mathematical manipulation is meant to describe the curve on which the agent is performing those actions. So we use the cosine function and the gravitational force constants to represent those physical limitations. Now, the second thing that we want to implement is the rewarding mechanism. So the reward is dependent on the position of the agent. The more that the position is closer to the target, the more the reward is higher. So this is one example of a rewarding mechanism that can work, but, of course, you can find many other rewarding mechanisms that can be even better. It's really dependent on the problem that you're trying to solve, the limitations that you have in the environment, and so on. So in the heart of the algorithm comes the model, and our model is consist of a neural network. So with TensorFlow.js, it's very easy to create a neural network. We start with a command sequential, which allows us to create a network by stacking layers one by one. Then we create the first hidden layer of the network, which consists of 128 units, and we are using here the rellow activation function. Now, in TensorFlow, whenever we create the first hidden layer, then the input layer is created automatically. This is why we need to specify the input shape whenever we specify the first hidden layer. In our case, the input shape is just simply equals to 2, because the state is defined only by the position of the agent and the velocity of the agent. Finally, to finalize the network, we add the output layer. The output layer consists of three units, because we have from any state only three available actions, and we want to predict the Q-values per action. This is why we have only three units in our output layer. Finally, we use the.compile command to attach the optimizer and the loss function to the network. So here we are using the very standard.adm-optimizer and the mean-square-error-loss function. Next, there are a couple of important functionalities that should be added to the model. As we described earlier in the algorithm, we want to be able to use the network to predict Q-values per given state and to train the network using the data that we sample from the environment. So in the first functionality, we use the.predict command that receives a set of states, and for each state, we'll predict all the Q-values that are associated with all the actions for each state that is given. Next, the training part is done using the.fit command, which receives the training data. So the first argument in the training data would be the inputs, which are, in our case, a list of states, and the output, which is the y-batch that is composed of all the computed Q-values per all the actions that are available from those states. So finally, we have the choose action function, which allows us to choose the next action that the agent should do, and that depends on the state that the agent is at, which also depends on the value of epsilon, because we are following the decaying epsilon policy. So as we describe the decaying epsilon policy, we start by flipping a coin, and in probability of epsilon, we will choose an action randomly. So this is the exploration part. This is where the agent gets to explore the environment. Now, otherwise, we would use our network to predict all Q-values for all the actions that are available in this state, and then choose the one with the highest Q-value. 
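Put together, the model and the chooseAction logic described above might look roughly like this in TensorFlow.js, as a sketch with illustrative values:

```javascript
// Hedged sketch of the network and action selection described above.
import * as tf from '@tensorflow/tfjs';

const model = tf.sequential();
// First hidden layer: 128 units with ReLU; inputShape [2] = [position, velocity].
model.add(tf.layers.dense({ units: 128, activation: 'relu', inputShape: [2] }));
// Output layer: one Q-value per action (left, neutral, right).
model.add(tf.layers.dense({ units: 3 }));
// Attach the optimizer and the loss function.
model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });

// Epsilon-greedy choice: explore with probability epsilon, otherwise predict
// the Q-values and exploit the best one. tf.tidy disposes the intermediate
// tensors automatically.
function chooseAction(state, epsilon) {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * 3);          // exploration
  }
  return tf.tidy(() => {
    const q = model.predict(tf.tensor2d([state])); // state = [position, velocity]
    return q.argMax(-1).dataSync()[0];             // exploitation
  });
}
```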
So this is what we are doing in the exploitation part of the algorithm, because we exploit the knowledge that the agent has gained. Now, it is noticeable that we are wrapping our computations using this.tidy command. And this is a technical thing, and it's caused because TensorFlow.js is based on WebGL, and WebGL does not have an automatic garbage collection. This is why if we create a tensor, we have to manually dispose it using the tensor.dispose command. But that's hard and error-prone, and this is why we have the.tidy wrapper. The.tidy wrapper will make sure that all the tensors that are created during this computation would be simply disposed automatically. So eventually, we want to run everything. The first thing that we do is to initiate the environment with this randomly selected state. Next, everything that we are doing inside this while loop represents everything that happens during the episode. So the episode is defined by these max steps per episode, the maximum steps that the agent can do during an episode. The first thing that we do in an episode is to choose the action. So the selected action is dependent on the state and the value of the epsilon. Now, because it depends on using our neural network and depends on the value of the epsilon, this selected action can be randomly selected or that it will be derived from the predictions of the Q values. Later, we perform that action on the environment and receives back an indication of whether we have reached a determining state or not. Then we want to perceive the feedback from the environment, so we compute the reward using the rewarding mechanism that we just saw, and that reward is dependent on the position of the agent. We also get from the environment the next state, the state that we have moved to upon performing that action. Next, we want the agent to remember this incident, remember everything that happened, so we could use those memories to train the network later. So we push into memory, which is just a list of records, this record that describes what happens. So it consists of the state, the action that we performed, the reward that we got, and the next state that we moved to. We also update the value of the epsilon accordingly, because we want to decay the epsilon exponentially. Finally, if the episode is over or if the agent has reached a determining state, we break the loop and continue to the next thing, which is replaying the memories. So let's focus on how we replay the memories of the agent. So the first thing that we do is to sample the memory, and we sample a batch from this memory. Now, it is recommended to do this sampling randomly across all the memories from all the episodes that we've been to. The reason we don't take a batch of memories, which is sequential, is because usually there is some correlation between those sequential records. And if we want to break this correlation, we need to select those memories randomly. And that improves the quality of the learning. So once we have this batch, we use our network to predict the Q values of the states and the next states that are described in those memories. Now, our goal is to arrange those X and Y arrays where X represents all the states and Y represents the mapping between actions that are allowable in those states and two Q values that we sample. So now comes the sampling part. 
So recall, we want our agent to use the memories that it has to compute the corresponding Q values and then continue by wrapping all this data and send it to the network for training. So we iterate over those records, and in each iteration, we use this Bellman equation that we saw earlier to compute the Q value of the action that was preformed during that time. Once we have computed the Q value, we arranged the data accordingly, so we could finally, in the last line, use the dot train call to train our network. So again, the network receives this array of states, and for each state that is described in X array, we have a corresponding mapping in the Y array between actions and their Q values. Okay, so that's all for now, and I hope you enjoyed it. Now we have a couple of minutes for questions.
Reinforcement learning learns complex processes by experimenting with its environment. In this session, you will get a glimpse into Q-Learning and Neural Networks, and how they can be implemented in JavaScript using TensorFlow.js library. As an example, we will show & discuss an implementation which solves the well-known Mountain Car problem.
10.5446/53581 (DOI)
Hello everyone. Thank you all for coming to this talk. My name is Nishant Shavasthu and I'm an Android engineer working here in Berlin. Today I'll be talking about compose for desktop. But even before we start jumping into compose for desktop, maybe the question is where did it all start from? What's the actual origin of this? Well, as most of you who are probably working in the mobile domain would know about, there's a new technology that Google came up with and this one is called Jetpack Compose. Basically Google wanted to do something different with the UI framework. So they started exploring something else that they could have worked on. So what is Jetpack Compose? Well, Jetpack Compose is a modern toolkit that is heavily inspired by what React.js was doing or React as a framework was doing. And then you have also seen something similar in Flutter framework if you have had a look at that. It's mostly a declarative and reactive UI framework used to simplify the UI development. And best of all, it's basically using Kotlin language. It's completely built around that. So you have almost all the functionalities that you would get from Kotlin language, declarative and providing more functionality in terms of how you write this UI in a simpler and much more concise manner. It's all possible because you're using Kotlin language. The important point though is that it's currently in alpha stage. So as in, I think it's in alpha 10 right now. It's not reached the complete stability. But we would probably see that sooner, I would say maybe by this year end or something, you might see some beta version of it come out. So how do you even start working with Jetpack Compose? So if you are using Android Studio, what you do is that you basically just go ahead and start a new project in that in Android Studio also something that I almost forgot to mention is that you need to use the Canary Bill, which is the bleeding edge of Android Studio. Once you have that and you start a new project, you have the option of choosing empty compose activity. What this will do is that it will kind of set up all your dependencies and everything that you need. And it will set up a bare bone Android project, which you could compile to an emulator. So what is the core part of this Jetpack Compose? Well, Jetpack Compose in general is going to allow you to have access to some declarative UI widgets that you could basically just mention and you don't have to write XML code or anything. What this means is that you kind of create composable functions. The way you write it down is that you create a standard function as you would do in Kotlin. And then you annotate it with a composable annotation. Inside it, you use specific declarations that you would use to define a certain widget. So for example, earlier in XML, you would be doing something called text view, but now you just do text and then you pass in a text value to it. So it's a bit different from what you would do, but all of your code is now in Kotlin. You're writing a UI code also in Kotlin. If you had to use this, you would in your main activity, go ahead and set the content to this composable function. And what this will do is that you're basically saying instead of setting the content of an XML file, now you're saying my composable function is what will draw the UI for me. And that's where the content exists. So when you run this application, you would see a blank white screen, which has Hello World text written on the top left corner. 
So that you can do, that's one way of doing it. But there's also a different functionality that Jetpack Compose comes up with. And this one is called the preview functionality. Basically, what it allows you to do is that you can in your Android Studio, in your ID itself, you can see the preview of the widget that you have defined, the composable function that you have defined without even compiling or building anything. It does the rendering for you inside the ID. So the way that you do it is that again, you create another function. This is separate from what your actual composable function was. And you create this one called some sort of a preview function. So we can define it as fun default preview. And then we say that there's an application theme to it, so that we have something like a setup color scheme for it. And then you pass the actual composable function to it. Notice there is an extra annotation on top of this. And this one is called add the rate preview. And basically saying that you can pass some more arguments to it by saying show background true, false, or some other arguments that you would want to pass. But basically it's defining how your preview is going to look like. But that's Jetpack Compose. The important point up till now is that everything that we have been discussing is something that Google has been building for some time. But what needs to be made clear is that Jetpack Compose is not completely a UI framework. Somehow the naming is so hard and things are like so difficult to name nowadays that you end up like this. But basically Jetpack Compose is a Compose compiler plugin. And then there is a Compose UI framework on top of it. So in general, there's like a common framework that we can think of, which is kind of doing all these transformations. So what does this Compose compiler plugin is doing? Well, Compose compiler plugin in general, what it's trying to do is it's trying to do incremental changes to your view hierarchy. Like once your view is drawn, it's going to make some sort of like calculations and it will try to like figure out when a certain state changes and what one particular part is going to change. It's going to do incremental changes to your UI. Versus what was earlier is that if you made a change in your UI in say an XML based system, you will have to reload the whole or redraw the whole UI again. But that's not what's happening here. Compose is kind of doing some sort of like diffing is trying to calculate the change and only make that small change of it. But that's the compiler's work. It's kind of like going to figure things out and it's going to write specific code for you on top of which there's a Jetpack Compose, which in my case, I would say that it's very specific to Android. And this is what ID is what Jetpack Compose is split into two different parts that there's a Compose compiler plugin and then there's a Jetpack Compose for Android. Now here comes the different section here. So the section here is that because the Compose compiler plugin is doing most of the work, where in the UI widgets are sitting on the Jetpack Compose side, what it means is that right now it's targeting Android, right? You can draw some UI on the Android system. But there's definitely a possible that you could use this compiler plugin to build something else for a different target. In this case, you could actually have gone to build for Compose for desktop, which is totally possible. So this is how it looks like. 
There's the Compose compiler plugin, which is shared between Android and what we're going to call Compose for Desktop. It does most of the heavy lifting: the calculations, the algorithms, the diffing, the change detection, the state management, all inside itself. How that gets targeted at a certain platform, for example Android or desktop, is decided by the specific framework on top. And that brings us to Compose for Desktop, the main topic of this presentation. Compose for Desktop gives you everything I just mentioned for Jetpack Compose on Android, but as simplified desktop UI development: it takes what people learned while building Compose for Android and brings it down to desktop UI development. For now, it only targets the JVM. The lead developer mentioned somewhere that this is part of the roadmap for JetBrains, who are building Compose for Desktop, because all their products, their IDEs, are JVM-based. That's not a hard limitation; the point is simply that the focus right now is the JVM. The first public milestone was released in November 2020, and it is also currently in alpha. One point to note is that it is also called Jetpack Compose for Desktop. As I said, naming is hard, so if you read through the internet people will call it Jetpack Compose for Desktop or Compose for Desktop. In this presentation we'll try to separate the two: Jetpack Compose for Android, and Compose for Desktop. If you actually want to build a project that leverages Compose for Desktop, it's quite easy: you just download the latest IntelliJ IDEA, and when you start a new project there is a Kotlin section on the left, and in it a Desktop option, Jetpack Compose for Desktop, which is experimental right now. You basically just go through the wizard and compile it. An important point is that you need JDK 11 to build this project, so the project JDK has to be set to something like, in my case, AdoptOpenJDK 11; you could use some other OpenJDK 11 too. The function in general looks exactly the same as what we did last time for Jetpack Compose on Android: a Hello World text function with a @Composable annotation on top of it. But did you notice something? In the imports there's androidx.compose.material.Text and androidx.compose.runtime.Composable. Again, naming is hard. The reason it's called androidx even though we're building for desktop is that this actually originated while building for Android. Jetpack Compose for Android came first, and then the target was expanded so that this Compose functionality could also build UI for desktop. By now the Android-specific parts have been split away into common code, which is why they can build it for Compose for Desktop as well as for Android, but the naming is still there. I don't know if renaming is on the roadmap, but for now it's not problematic.
Basically it's not going to affect anything; it's just the package name saying androidx while you're building for desktop. We'll see that. So you do that, and then to run this whole thing, this composable function you just built, you create a basic main function and define a Window, because it's not Android anymore: you don't call setContent, you define a window and push your content into it, which is your Hello World composable. In this case too, when you define the Window, you'll see another androidx package name in the import. But it's very simple: you build and run, and this is what you get, a standard window with the Hello World text in it. One thing to note is that there is no preview setup that I could find when building for Compose for Desktop. On Android you definitely have that @Preview annotation, but there's nothing set up, at least in IntelliJ IDEA, so you end up having to build the project every single time. What this means in practice is that you can write any composable function: you can create a list, add images, do media; there's a lot of functionality out there and it's fully functional enough that you can take the code you built for Android, bring all the composable functions over, and instead of calling setContent you pass them into a Window. Your UI application will still compile and display on whatever platform you're using, Linux, Mac or Windows, inside a window, and things work out of the box. When you create this new project, there's something very specific inside the build.gradle.kts file that I want to show you. There's a bunch of code there, but the important part is what I've highlighted. Under application you define the main class, which comes from your main.kt file; the compiled class is MainKt. Inside that you have nativeDistributions. This is where you define which target distributions you can create out of this project. You write your code in Kotlin, everything is there, you have your composable functions, but where do you want this to run? If you want it on macOS, you target the dmg format; on Windows, the msi format; on Linux, the deb format. If you actually look into the source code of the TargetFormat class itself, it shows which platforms can be targeted and in which formats the binaries can be created: for Linux it can produce deb (the Debian format) and rpm, for macOS dmg and pkg, and for Windows both exe and msi. If you actually want to generate them, you go to the terminal and run ./gradlew package. The binaries are generated under your project name/build/compose/binaries/main/ followed by the package type, so in this case that could be a deb, and in the Mac version a dmg.
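Roughly, the wizard-generated project boils down to these two pieces. The exact package names and DSL details have shifted between the alpha releases, so treat this as a sketch of the version discussed here rather than the definitive API:

```kotlin
// src/main/kotlin/main.kt
import androidx.compose.desktop.Window   // desktop entry point in this alpha; note the androidx package
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

@Composable
fun HelloWorldText() {
    Text("Hello, World!")
}

fun main() = Window {   // a desktop window instead of an Android Activity
    HelloWorldText()
}
```

```kotlin
// build.gradle.kts (excerpt)
import org.jetbrains.compose.desktop.application.dsl.TargetFormat

compose.desktop {
    application {
        mainClass = "MainKt"
        nativeDistributions {
            // which installers ./gradlew package should produce
            targetFormats(TargetFormat.Dmg, TargetFormat.Msi, TargetFormat.Deb)
        }
    }
}
```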
But if you're not used to jumping into the terminal that much, there's also an option inside the IDE. If you go into the Gradle panel, under the compose desktop group, you can find the package, packageDeb, packageDmg and packageMsi tasks, depending on which target formats you've set up. What this means is that once your application is ready, you can generate the binaries right away in whatever format you want. But why do we actually want to do all of this? Why Compose for Desktop at all? The point is that traditionally, desktop UI frameworks, at least on Java and the JVM, were built with Swing and AWT, and they're all callback driven. It's not so good: you wire everything up, a callback fires, and then you have to update the whole UI, maybe redraw everything. Compose for Desktop is a step forward. It tries to do things in a much cleaner manner: it allows asynchronous, incremental changes to your UI instead of redrawing everything or doing excessively complex calculations you don't need. The other problem is that we've already been handling and working with Electron, Qt, GTK and so many other frameworks, and for each of them you have to learn a different language. But if you're already proficient at building Android UIs with Kotlin, and there's the possibility of building the same UI on your desktop, you'd definitely want to use it, right? That's why this is being explored. Most of the functionality actually lives in the common code base, Kotlin now has the multiplatform implementation in place, and a lot of functionality is being written in a way that can be targeted at different platforms, so Compose for Desktop was a natural evolution. So what's the magic? The magic is mostly Skia. Skia is a 2D rendering library; it's been around for many years and is used in Mozilla, Chrome and Flutter, which is how most people know about it. Skia is kind of the de facto 2D rendering library, it's open source under a BSD licence, and it's mostly maintained by Google. The way Compose for Desktop uses it is through Skija, the Java bindings for Skia, which are maintained by JetBrains by the way, to trigger the drawing calls in the Skia library when something needs to be drawn in the desktop UI. We mentioned that Compose for Desktop basically works only on the JVM right now. The reason is exactly that these Java bindings for Skia are what JetBrains maintains. To build for some other native target, say a native macOS version or something else, you'd need bindings to Skia to draw on that particular native platform. If such bindings exist, there's definitely the possibility of producing the same UI on a native platform that is not the JVM.
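To make the "incremental, state-driven updates instead of callbacks" point concrete, here is a small hypothetical counter composable. It is not from the talk, just a sketch: when the state changes, only the composables that read it are recomposed, with no manual repaint callback.

```kotlin
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.*

@Composable
fun Counter() {
    // Declare the state once; no listener wiring or manual invalidation.
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Clicked $count times")   // recomposed whenever `count` changes
    }
}
```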
Coming back to the JVM: right now the JVM is what gives us this portability. Wherever there's a JVM, you can run this code, and you have the Java bindings for Skia. But beyond that, if you wanted to target a native platform, you could use a Skia binding for that platform and build Compose on top of it. So what's next? The first thing that needs to happen is that Jetpack Compose for Android needs to hit stable. Once that happens, I think Compose for Desktop will have more surface area covered and will become stable much faster. Right now most of the engineering is going into Jetpack Compose for Android and into stabilizing the API, and once that's done I think it will carry forward onto Compose for Desktop. There is some JUnit-style testing you can use for testing your business logic, but the idea is that there will be more testing setups, something like screenshot testing or mocking of your UI. That's still to be explored; it's not platform dependent, it's more a question of whether it will exist in the future. In theory this should also work on GraalVM, which among other things reduces startup time when you load up your application, so GraalVM is also something worth looking into with Compose for Desktop. And as I mentioned, you can target native platforms once more Skia bindings become available. What you should do right now, or whenever you get the time, is go to the compose-jb repository on GitHub and try out the examples. I did not cover big examples in this session because building UI in general is a huge topic; I thought an introduction to how Jetpack Compose does things internally would make more sense. But there are definitely good examples you should go and try, and it's worth looking at the code base to see how things work. Here are some links you may want to use. And I believe that is all. Thank you. If you have any questions, please go ahead. And we are live. Thank you very much for the talk, Nishant, it was a pleasure, and thank you for being with us today for the live Q&A. Thank you for having me, this was super nice. So we have the most upvoted question; if anyone has more questions, feel free to drop them in the Kotlin Dev Room channel, or you can jump over to the private room afterwards. Let's kick off with the first one, from Russell Wolf: do you know any good samples for working with the Compose compiler plugin without a UI? Well, typically there aren't any public examples right now, but there are people working on this; I think Jake Wharton tweeted some time back that he is going to make some of these projects public. I haven't made any of those myself. But if you understand the logic of what I was talking about in the presentation, the compiler plugin is essentially doing a kind of tree-shaking: it's taking all your nodes and allowing you to do incremental updates. So if you have a case where you want to do something similar, that's where you would plug it in. The rest, the UI part, the one where the Skia engine comes in, is the part that draws things for you.
That's why, if you just skip that part, you basically still have a functional project. So in my experience, something similar would be where you have a list of items that you want to update at a certain point, but you only want to update a subset of it; that's where you could use the compiler plugin from Jetpack Compose, the Compose compiler plugin. Awesome. Then we have another question, and this one is from me: have you actually tried Compose for Desktop with a pet project? What is it about, is it even open source, what was a nice example of usage? So I have an open source web application, which I promote all the time: it's called app-privacy-policy-generator. That pet project is used by people all over the world, and it's a web-based application; it uses Vue.js and standard HTML and JavaScript to create a template of your privacy policy. What I wanted to do was bring this project down to a desktop version. So I started designing the whole thing in a Jetpack Compose version and then putting it into a Compose for Desktop version. It kind of works; it's almost there. It's not open source yet, but that's part of the roadmap. Hopefully what you will see is that this pet project, which right now lives as a web app, becomes something people can download locally, so you don't need to be online to generate your templates. Another thing the desktop version allows me to do is add more logic, which I couldn't really do in the web version, because you have access to the file system and a lot of other things you can work with. So the project will be able to use more templates: you just feed them in and generate your own version of a privacy policy. A lot of extensibility becomes possible once you start using Compose for Desktop. Awesome. Then I have another question related to your past experience: have you done desktop development with other frameworks in the past, and how does it compare in terms of developer API? Was it easy to use, was something missing, what was the experience overall? So I've mostly been doing Android development, and there's obviously a big jump from XML layouts to a more fluent API that you build with Kotlin, which is a good thing. But I've also developed applications using Swing, the Java framework for building UI, and that was pretty hard to work with; there were always so many gotchas, and it's not exactly developer friendly. As you move to something more fluent, more Kotlinized, I would say, because we use Kotlin so much, Compose for Desktop just brings you into a different land. When I started designing my pet project, I didn't have to remember everything, because first of all the IDE supports all of this, and when I'm writing it, it's actually readable. That's the best part: the code you write for your UI is something you can read through and understand exactly how it's going to present itself.
So I think it's going in a good direction, but we should all keep in mind that right now Compose for Desktop is at the milestone stage; it's not completely stable yet, though hopefully it becomes stable. So there are some small quirks here and there. People are starting to talk about it, but for now the tried and tested frameworks like Swing and AWT are more stable. Eventually, when it goes stable, I think more people will start using Compose for Desktop. Awesome. We still have a couple of minutes, so if there are other questions; yes, there is one. The question is: are there any native bindings, or was that just hypothetical? So yes, I think I did mention this in the slides. The way Compose for Desktop works is that you have the compiler plugin, and everything on top of that goes through the Skia engine, but it's not talking to Skia directly. There are well-maintained Java bindings, Skija, which JetBrains maintains, and that's how they're able to drive the Skia engine from the Compose for Desktop side. That's one of the reasons it only works on the JVM right now: the Java bindings for Skia exist. If you did want to go and do something like a native macOS version, you'd need some sort of Skia binding for that, and that doesn't exist; at least, no one is maintaining one, or no one has created one yet. And I think we lost Nishant, or we had problems with his connection.
Developing for multiple platforms is picking up speed as Kotlin Multiplatform gets better with every release. That mostly means shared logic written in Kotlin which can then be targeted to many platforms. Until recently it wasn't easy to develop UI for multiple platforms on the desktop side. That is changing with the introduction of Compose for Desktop, which allows building application UI for Linux, macOS and Windows. In this session you will learn what Compose for Desktop is, how it works and how you can jump right into building for multiple platforms, opening up domains beyond mobile.
10.5446/53584 (DOI)
Hello everyone. Before we start this session, let us all meet Julie. Julie is an Android engineer working at the XYZ company on the ABC retail mobile application, and she has been working on this application for three or four years. Recently Julie has made many enhancements in the application: she has started using Kotlin, and she has started using the Kotlin Android Extensions plugin, both Kotlin synthetics and Parcelize. Synthetics, as you know, help us find or refer to a view and are an alternative to findViewById, and Parcelize is used for creating Parcelables, thereby avoiding the boilerplate code we would otherwise have to write. Now comes a time for change; there comes a big twist in the tale: Kotlin Android Extensions is no more. After hearing this, Julie gets worried. She goes to Google and asks: I have so much Kotlin extensions code, what should I do now? Are you introducing something new? How do I migrate to that? To answer all these questions, here enters the hero of the scene: View Binding. So hello everyone, I'm Monika Kumar Jethani, and I welcome you all to this session on Goodbye Kotlin Extensions, Welcome View Binding. Before we dive into the presentation, a little about myself: I'm an Android engineer at Walmart Global Tech India. I'm from India, and I'm a chef, reader, writer and speaker, so I like code as well as recipes. You can reach out to me on Twitter or LinkedIn if you have any questions about Kotlin extensions or View Binding after this session. All of us Android devs, like Julie, have seen the transition through different libraries just for finding and referring to views. We had findViewById at first, then we moved on to ButterKnife, after that came data binding, after that came Kotlin synthetics, and now this new person on the scene, View Binding. All of these libraries can be evaluated in terms of three metrics: elegance, compile-time safety and build-speed impact. There are pros and cons in each of them, and that is why the new library was brought in. So let's talk about why this change has come. What was the problem with Kotlin synthetics? First, synthetics can be used with Kotlin only. If your app's code base has Java code, you cannot use synthetics there to find or refer to views. Synthetics also pollute the global namespace. Suppose your app has two fragments, a list fragment and a detail fragment, and both display movie-related data: the list fragment shows the list of movies and the detail fragment shows details about a movie. If by chance both XML layouts give the ID movie_name to the TextView, then when you refer to movie_name in your Java or Kotlin files you have to import the correct synthetic import, and you have to pay close attention to which one you include. If you include the wrong import, your app will crash, and you only get to know about it at runtime.
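To illustrate that ambiguity (the layout and fragment names here are hypothetical): the synthetic accessor is generated per layout, both share the same simple name, and only the import decides which layout's generated property you are using.

```kotlin
// DetailFragment.kt
import android.os.Bundle
import android.view.View
import androidx.fragment.app.Fragment
// The IDE offers both of these; picking the wrong one still compiles:
import kotlinx.android.synthetic.main.fragment_detail.movie_name
// import kotlinx.android.synthetic.main.fragment_list.movie_name

class DetailFragment : Fragment(R.layout.fragment_detail) {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        // Resolved via findViewById under the hood; if the referenced view is not
        // in the inflated layout, you only find out when this line runs.
        movie_name.text = "The Matrix"
    }
}
```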
And you won't get to know about this at compile time. The third point, which is a continuation of point number two, is that synthetics are not at all good when it comes to compile-time safety, by which I mean both null safety and type safety. As I mentioned in the example, if you are in the detail fragment and you use movie_name in the Kotlin file, the IDE asks you to pick the correct import, the synthetic version for the detail fragment layout or for the list fragment layout for that view ID. If you import the wrong one, that view won't be there, it will be null, and your app will crash. Similarly, synthetics give us no assurance of type safety. So what is the change that has been proposed? We should be using view binding in place of Kotlin synthetics for finding and referring to our views. We can continue using Parcelize, but we shouldn't use the Kotlin Android Extensions plugin, because starting from Kotlin 1.4.20, in November 2020, the plugin's deprecation period began, and in September 2021 it will be removed completely in the next release. So if you want to use Parcelize, you can continue using it, but not through the Kotlin Android Extensions plugin; you need to use the kotlin-parcelize plugin instead. Now, moving on to what view binding is. View binding helps developers find their views, and it does so by generating a binding class. In that binding class, the views from the XML are exposed as properties, but only those views that have an ID in the XML. View binding is enabled on a module-by-module basis, and if you enable it for a particular module, it generates a binding class for each of the XML layout files in that module. If you want to exclude view binding for a particular XML file, you can do that. And if your app has a lot of modules and you want to enable view binding for every one of them, you can configure it from your project's build.gradle file. Now, moving on to the benefits of view binding: it has come into the picture, we all know about it, but what does it offer over synthetics, and why has it come? As I told you earlier, all these libraries in this transition are evaluated on three metrics: elegance, compile-time safety and build-speed impact, and view binding performs very well on all three. First of all, it can be used with Java code, and it requires no change to the XML layouts. With the help of view binding we also write less boilerplate, and you can see it's interoperable with Java. Second, when it comes to compile-time safety, view binding performs very well: it provides both type safety and null safety, and I'll talk in detail about how it assures us of both. And because view binding does not use any annotation processing under the hood, it does not impact build speed. There are libraries like data binding which offer compile-time safety but impact build speed to a certain extent, because they use annotation processing under the hood; that is not the case with view binding.
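A sketch of what that setup looks like in the module's Gradle script (shown as Kotlin DSL; the Groovy equivalent is analogous), together with keeping Parcelize through the new plugin:

```kotlin
// build.gradle.kts (module level)
plugins {
    id("com.android.application")
    id("kotlin-android")
    id("kotlin-parcelize")        // replaces kotlin-android-extensions for @Parcelize
}

android {
    buildFeatures {
        viewBinding = true        // generates a binding class per layout in this module
    }
}
```

```kotlin
// Parcelize keeps working as before, now from the new plugin's package.
import android.os.Parcelable
import kotlinx.parcelize.Parcelize

@Parcelize
data class Movie(val name: String, val year: Int) : Parcelable
```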
So what do developers need to care about when they use view binding? They need to attach an ID to every view they want to refer to in their Kotlin or Java files. If you attach an ID to a view you want to use, an entry for that view is made in your binding class, and through view binding you can access it in your Java or Kotlin files. If you miss giving the view an ID, you won't be able to access it via view binding. This is an important thing developers need to know. To use view binding we need Android Studio 3.6 or above and Gradle version 5.6.1 or above. Now, getting started with view binding: we set viewBinding to true in our build.gradle file, and as I mentioned, view binding is enabled per module; if you want to enable it for your whole project, you put this setting in your project-level build.gradle file. I also mentioned that view binding generates binding classes for all the XML layout files in that module. If you want to skip view binding for a particular layout file, you put tools:viewBindingIgnore="true" on its root element, and the binding file won't be generated for that layout. Now let's start migrating. Here I have a small application, the basic activity template generated by Android Studio. The UI is quite simple: I just have an activity with a fragment and a FAB. The moment I click Next, it opens the second fragment, and with Previous I can come back to the first fragment. You can see there is an activity which has a FAB, and I have the synthetic import here. So whenever I migrate from synthetic imports to view binding, the first step, after enabling view binding, is to clean up all the synthetic imports. First I'll show you how to enable view binding: I write buildFeatures { viewBinding true }, sync the Gradle file and build the project. Here I want to mention how the binding classes are named: the binding class name is the layout file name in camel case with a Binding suffix. For activity_main.xml the generated binding class is ActivityMainBinding; for fragment_first.xml it's FragmentFirstBinding. Now that the project has built, I'll show you the binding files that were generated; I enabled view binding for my app module, and you can see binding classes generated for the layout files in that module. Coming back here, I first clean up the synthetic import, and then I get an error because the import has been removed. Now I use view binding in my activity and fragments: to setContentView I pass the binding's root element, and I refer to the FAB from the binding class. I also want to point out one thing: you are able to refer to this FAB only because you have given it an ID in your layout file.
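In code, the migrated activity looks roughly like this; the class and view names follow the basic activity template, so treat them as assumptions:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
// ActivityMainBinding is generated under <your.package>.databinding

class MainActivity : AppCompatActivity() {

    private lateinit var binding: ActivityMainBinding

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        // Pass the root view instead of R.layout.activity_main.
        setContentView(binding.root)

        // Only views with an ID in activity_main.xml appear on the binding.
        binding.fab.setOnClickListener {
            // handle the click
        }
    }
}
```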
To use view binding, we first obtain the binding object by calling the inflate method to inflate the layout, then we get the root view of the layout through binding.root, and we pass that root view to setContentView to display the layout. Notice that earlier we passed the layout file to setContentView, but here we don't. So where is the layout file? How does view binding get linked to it? Let's go behind the scenes and have a look. Here I have the ActivityMainBinding class. The first thing to notice: it implements ViewBinding, which means it's a binding class. It keeps a reference to your root view and to all the views which have an ID in your layout. Here you can see the root view is the CoordinatorLayout, and there are only two views which have been given an ID, the toolbar and the FAB. If I wanted a reference to the included content_main layout as well, I would need to give it an ID too. So you have the views which were given an ID plus the root view, getRoot returns the root view, and then there are overloaded inflate methods. And here you can see the activity layout file: instead of being mentioned in the activity or fragment class, it is referenced here, and that is why I can use the binding class directly for inflating and displaying the UI. There is one important static method in the ActivityMainBinding class, which is bind. Inside it, findViewById is actually used to get the views, so underneath view binding still uses findViewById, and it makes sure that each view is of the expected type, here FloatingActionButton, and that it is not null. If it is null, it bails out; only if it is not null do we get an instance of the binding class. So if something is missed, with view binding we get to know at compile time rather than at runtime. For example, if we have layouts for landscape and portrait modes and some views are not present in one of them, those properties are nullable, and if we attempt to refer to those views without any checks in our code, view binding tells us at compile time. Now, moving forward to the fragment class: here I use the second overload of the inflate method, the one for fragments, and since I'm supposed to return a view from onCreateView, I return the root view. Then I have this first button: I remove the synthetic import and refer to it from my binding class. Note that in the binding class it is buttonFirst, in camel case, not with an underscore in between. So that's view binding in my activity and fragment. Also, fragments outlive their views, so when the fragment's view is destroyed you should clear the binding. This is how easy it is to migrate from synthetics to view binding. You can see that, unlike data binding, where we had to change the layout file and wrap it in layout and data tags,
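the fragment side of the migration, including clearing the binding when the view is destroyed, looks roughly like this (again using the template's names as assumptions):

```kotlin
import android.os.Bundle
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.fragment.app.Fragment

class FirstFragment : Fragment() {

    // Fragments outlive their views, so keep a nullable backing field.
    private var _binding: FragmentFirstBinding? = null
    private val binding get() = _binding!!   // valid only between onCreateView and onDestroyView

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        _binding = FragmentFirstBinding.inflate(inflater, container, false)
        return binding.root
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        binding.buttonFirst.setOnClickListener {
            // navigate to the second fragment
        }
    }

    override fun onDestroyView() {
        super.onDestroyView()
        _binding = null   // clear the binding when the view goes away
    }
}
```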
So with view binding we make no changes at all to the layout XML file. You can see the install finished and the app works the same as before; that's how easy it is to have view binding in your projects. So let's move on to the next slide: view binding versus data binding. With both view binding and data binding we end up finding views, but data binding is much more than that: the specific purpose of data binding is to bind data from your view model or data source onto your XML, whereas view binding was not built with that purpose in mind; it's built purely for finding and referring to views. You need to be very clear on the purpose of these libraries. If you just want to find and refer to views, use view binding; if you want to bind data from your data source to the views, you need data binding. Coming to the first point, changes required to the layout XML: in your build.gradle you need a setting for both view binding and data binding, but view binding needs zero changes to your layouts, while data binding requires changes in your layout. Moving to support for layout variables and expressions: view binding does not support layout variables or expressions, but data binding does; with layout variables and expressions you can assign values to your views directly. Coming to the third point, impact on build speed: view binding is faster because it does not use annotations; data binding uses annotations and is slower. Coming to the fourth point, collaborative usage: yes, you can use view binding and data binding together in your project, view binding simply to refer to the views and data binding to bind them to data. Based on your requirements you can choose which one to use, but one thing is clear for sure: we shouldn't be using Kotlin Android Extensions anymore. Summarizing this talk: you can continue to use Parcelize through the kotlin-parcelize plugin, you should remove the Kotlin Android Extensions plugin from your code base, and you should use view binding for finding and referring to your views. That's the end of the session; thank you all for attending, and feel free to write to me in case of any queries. Thanks a lot. All right, I think we are live now. For the first question, and also thank you, Monika, for your talk, it was a great talk. First question: does view binding generate Java or Kotlin classes? It generates Java classes. The binding classes are Java classes, and as I mentioned in the video, they implement the ViewBinding interface; they are Java classes. All right, another question: are there any drawbacks in using view binding over vanilla findViewById? I don't see any drawbacks at the moment; I see advantages. View binding offers compile-time safety, both type and null safety, for which we had to put additional checks around findViewById. So at the moment I don't see any drawbacks in view binding. All right. The next question has already been partly answered: if I have an Android library, how can I make sure that the view binding generated classes don't leak into my public API? I think you're muted. Yeah, I have not tried view binding to that extent right now, using it with an Android library.
But view binding generates its classes behind the scenes much like data binding does: it works at the UI level and generates a binding class per layout. So I don't think it will pose any sort of leak of the library-related classes; I expect it to work the same way as data binding, where I haven't come across such a limitation either. So I don't think this will be a risk to the library part of the Android app, but I have not tried it at the moment. Thank you. Another question, from Martin: can we make view binding generate Kotlin classes, to avoid multiple compilations, or maybe in the future? View binding on its own generates Java classes; at the moment we cannot make it generate Kotlin classes. I have not tried that, and as far as I can see in the documentation from the Google Android team, there is no way to make it generate Kotlin classes. We get Java classes, but view binding is interoperable with the Java part of the app as well. All right, perfect. I don't see any more questions here, so I think we are all good. If you have any questions, feel free to put them in the chat, and if you want to continue the Q&A afterwards you can go to the talk room and continue the chat there.
In this session, I will be talking about the paradigm shift from Kotlin synthetics to View Binding and will be covering the following: 1. Demerits of Kotlin synthetics 2. The road forward 3. What is View Binding and what are its benefits? 4. Migrating from Kotlin synthetics to View Binding with a code walkthrough
10.5446/53585 (DOI)
Hi, welcome to our talk on how to write your own MVI system and why you shouldn't. My name's Matt and I'm joined by Mikołaj. Hello. Combined, we have over five years' experience developing with MVI, and in that time we've refined how we approach it. Today we want to show you how to create your own MVI framework in less than 30 lines of code. Let's start with a quick introduction to MVI to ensure we're on the same page and use the same terminology. The best way to think about MVI is to start with the user. The user interacts with the app, perhaps they click a button, and these sorts of actions turn into intents. The intents are passed to the model, and it's the model's job to process them, hooking into any network calls or business logic, before generating a state and streaming it to the view. The view then renders the state into the UI. As the model may take time to process an intent, it is important that this cycle of data is non-blocking. Now let's take a closer look at the model. It's the model's job to ensure the integrity of the state and also that it is immutable to the outside world. But how is state changed? The intent goes through the transformer, which performs business logic such as network calls. The reducer receives the result and the existing state and from those generates a new state, which is then streamed to the view. Really the process boils down to three key concepts: data flows in one direction, processing of intents is non-blocking, and the state is immutable. So you want to build your own MVI library. Where do you start? A good first read is the blog post that popularized MVI on Android: Hannes Dorfmann wrote one of the first MVI libraries, inspired by the Redux library from the JavaScript world. Jake Wharton's talk on the subject is also quite interesting; it shows how you can implement MVI using nothing but RxJava. Garima Jain, in her talk, describes how to go from MVI using RxJava to MVI using coroutines. Here are some of the better known MVI libraries that you can also take inspiration from. Given there are already several MVI libraries out there, why would you write your own? We feel there's a mismatch between how simple the MVI concept is and how these libraries implement it in practice. We don't think they necessarily deliver on the MVI promise: most of them have a high learning curve, some of them overuse inheritance and force your code to behave in a certain way, and some of them have tons of boilerplate. If you're not happy with the compromises that existing libraries make, you might decide to write your own. The first thing you're going to have to figure out is how to write your multi-threaded code. While you could use the Java threading APIs, RxJava or Kotlin coroutines come with much less complexity. Having implemented MVI with both frameworks, and seeing as Google has officially endorsed coroutines for Android, we recommend using Kotlin coroutines as a base. Of the two, coroutines present a much more readable approach to asynchronous code and allow you to avoid streams completely for simple operations. To demonstrate how simple it can be with coroutines, we will show you how to create an MVI framework. We will demonstrate loading a list of posts and navigating to a detail screen when you pick one of them. We have a basic fragment that doesn't really do much currently, but we do have some utility functions that will help us along the way, updateList and navigateToDetails.
We've got a view model wired in through Koin, but it could equally be using Dagger. Let's click into the view model: we just have a post repository wired up, and that itself has a getOverviews function. Where's the best place to start? The state is the most important element of the model, so let's start with that. We create the state class, PostListState. It needs a list of overviews, so let's create that and initialise it with an empty list; we'll see later how that makes some things a bit easier. To stream the state, we need a class that holds the current state and emits it upon subscription. Not only that, it should also emit only distinct state changes, and conflation is nice to have as well. Fortunately, recent additions to the coroutines framework called StateFlow and MutableStateFlow tick all of these boxes. To expose the state, we're going to create a MutableStateFlow. MutableStateFlow needs an initial state, so we initialise it with an empty PostListState. Since we want the mutability to be hidden, we expose it publicly as StateFlow, using the Kotlin convention of prefixing the backing field with an underscore. Back in the fragment, we now want to access that: we collect viewModel.state and pass the values to the updateList function we already have. Collect is a suspending call, so we wrap it in lifecycleScope. If we run that, we'll see it's still blank. So how should we approach loading the posts? We need to send an intent to our model. In a traditional Redux-style implementation, you would use a stream to deliver intents to your model: intents are sent as a common supertype and later interpreted by something called a dispatcher to recreate the type information and invoke the correct flow. We don't think this is good enough. Type safety is very important, especially if you have many intents flying around. But who said we have to use a stream for intents? Why not simply invoke public methods directly on our view model?
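A sketch of where the code stands at this point. Names such as PostOverview, PostRepository and updateList mirror the ones mentioned in the demo, but the exact signatures are my assumption:

```kotlin
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

data class PostListState(val overviews: List<PostOverview> = emptyList())

class PostListViewModel(private val postRepository: PostRepository) : ViewModel() {
    // Mutable backing field; the view only ever sees the read-only StateFlow.
    private val _state = MutableStateFlow(PostListState())
    val state: StateFlow<PostListState> = _state
}

// In the fragment:
// lifecycleScope.launch {
//     viewModel.state.collect { updateList(it.overviews) }
// }
```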
So let's create a reduced function. Now to actually make things simpler, we can actually use the post-estate as a receiver type for the lambda. And we'll see as we update the code how that improves things. Of course, now we should be able to run the app. And for clicking on these items, you can see it displays a toast. Click on another one. Again, toast is displayed. Now, of course, we don't really want it to display the toast we actually wanted to navigate. Navigation requires context, and the fragment is actually going to perform the navigation. So why should we create a function in the view model? Generally, it's good practice to have a view which makes no business decisions by itself, a so-called dumb view. It's also nice to have all of your business logic in one place so that you can unit test it. Okay, so let's create a function and we'll call this onPostClicks. And we'll provide the post to that. Okay, so let's just create the in the view model. And I'll move that function up. Now, again, we want to do things within the view model scope, but actually what do we put inside this function? So some MVI approaches that we've seen suggest creating an optional field in the state to tell the UI when to navigate. However, navigation is typically a one-off event. So to stop navigation happening multiple times, for example, when you later return to your original screen, you have to ensure the field is reset once the navigation has completed. As you can tell, this can quickly get messy. Instead, a cleaner solution is to introduce the concept of side effects. A side effect is code that does not modify the state of the model, but still does some important logic outside of it. We can represent one-off side effects through a dedicated channel for events that should only be ever handled once. So before creating the channel, let's create an event. So we'll create a navigate details event. And we will use a buffered channel to hold the side effect. Now for the public field, can we just use a receive channel? Actually, if we use a receive channel, someone could consume it, which cancels it after unsubscription, effectively making this channel one-use only. Of course, we don't want that, so it's best to expose it as a flow. Okay, so let's use flow. And actually what we'll see when we look at side effect, there is this receive as flow function, which we can use. And then now here, we're going to do side effects and create our navigator details object. So help improve readability, we can actually break this out into a post side effect function. While we are here, one more thing to tidy up would be cleaning up viewModelScope.launch. We could put this in a function. Yeah, right, so let's create an intent function for this. And we're going to need a lambda. And for the lambda, we'll actually call this transform, because this is really what we're doing. We're doing the transformations within this. So we'll put the viewModelScope in there. And simply call our transform lambda. We can simply update our two functions in here and remove the wrapping of the viewModelScope. So that's certainly a lot neater. Additionally, everything will run on the main dispatcher until you switch context. So coming back to MVI being non-blocking, we really do not want to block the UI. We may want to set the coroutine context to the single thread we've defined previously. Yes, it's very true. So let's use the single thread. 
Now, of course, going back to the side effects, we need to actually listen to our new stream, so to implement that. I'm going to just copy and paste the original state code. Let's give that a quick run. And of course, we see it now. Okay, so that's as we would hope. Let's go back to the viewModel. This code isn't very reusable. Can we maybe break the framework code out into a separate class? Sure, I guess it's going to contain our state, so we could just call this container. I'll move all our MVI code within that. Now, one thing to notice is viewModelScope isn't found. And actually, the reason for this is because, well, container isn't a viewModel. So we'll need to provide the coroutine scope. Now, we need to actually create an instance of our container. So let's just do that. We'll just create a file container and provide the viewModelScope to that. And then we can update the intents to prefix with container. Now, we'll see actually reduce isn't found. And the reason for this is because of the scope of the lambda. So all we can do is use a receiver type of the container. And we'll just quickly update the fragment to use the container. This already looks much better. But the code's not generic. We still depend on post list types. You're right. So let's introduce some generic type parameters of state and side effects. So now everywhere we see post list states, we're going to replace with states. And everywhere where we see navigate to details, we'll replace that with side effects. Now, of course, one thing that's missing from this is initial states. So we'll actually have to provide that. And of course, the generic types on the receiver type. And then the type parameters into the container instance. And our default state. Let's give that a quick run. And again, we see the navigation still works. And so there we have it, MVI. This is the complete code, for example. Simply MVI. There's no inheritance or boilerplate. And it all fits in less than 30 lines of code. So should you write your own MVI framework? Our recommendation is don't. The example we gave you was simplified. In the real world, an MVI framework needs more supporting features. For example, DSL scoping. Currently, there's nothing stopping you from, for example, nesting reducers. The threading model used in this example is simplified and it's not the most efficient. The framework is not unit tested. And it's not easy to test your view model either. There is no idling resource support for your UI tests. Or, saved state support for surviving process death. So why write your own when we did all of this for you? Orbit, our MVI library, offers an API that's quite similar to our example code. It has all the missing features outlined here and more. And we're not stopping there. We have a lot of nice features on our roadmap. A popular feature in some MVI libraries is time travel debugging. Where you can record all the states emitted from a view model and then rewind and replay them. We're also planning to support screenshot and interaction testing and making library multiplatform. We hope we have demonstrated how easy MVI can be and that you found this talk useful. We welcome you to visit our GitHub page. Thank you for listening.
Model-View-Intent is a simple architectural pattern in principle, but questions come up when you try to implement it yourself. We draw on our 2+ years of experience with orbit-mvi, our MVI library, to show best practices for using an MVI system in your application. How do you integrate with Android? What happens when you rotate your device? What about navigation or one-off events? How do you make the system type-safe? What about developer experience? If you've ever had similar questions, come to our talk!
10.5446/53586 (DOI)
Hello everyone. My name is Svetlana Isakova, I'm a developer advocate at JetBrains and a co-author of two books about Kotlin, Kotlin in Action and Atomic Kotlin. Today I want to talk about the Kotlin roadmap and focus mainly on the upcoming language features. The Kotlin roadmap describes the key areas the Kotlin team is working on right now and is going to focus on in the nearest future. This information is available online and is open to everyone. The time frame for these plans is around half a year. They're not 100% promises; you can think of it as a declaration of intentions. You can also find there what's on the radar and see what has been postponed to a later time. The majority of the Kotlin team is working not on adding new stuff but on improving the overall experience. That includes speeding up the development process, making the change-test-debug cycle really fast, and rewriting the Kotlin compiler. The new compiler is optimized for speed, parallelism and unification; later we'll also work on pluggability. Some parts are already there, like the new type inference algorithm in Kotlin 1.4. Some parts are almost there, like the new JVM IR backend. And some parts are in active development, like the new frontend logic. Improving the stability and performance of IntelliJ IDEA and Android Studio is an ongoing process, and we have quite a bunch of improvements here. I want to ask you to try the new JVM IR backend: in Kotlin 1.4.30 it's in Beta, it's becoming the default in Kotlin 1.5, and we'd be really glad if you could turn it on in your project and make sure everything works. If it doesn't work for your tricky corner case, please share that with us immediately. It's already used by Jetpack Compose and is important for its stabilization. Everyone knows that Kotlin is great for Android development. We also find it important to spread the knowledge about using Kotlin for server-side development, mainly with such frameworks as Spring and Ktor. And of course we are working on the Kotlin Multiplatform Mobile solution that allows sharing code between Android and iOS. I'm really happy to see that both these topics are extensively covered in today's agenda. We have also started to collaborate more actively with the community and to gather and publish your Kotlin stories. You can find them on our website and also share your own story. It's often more valuable to hear what other people say about their experience with Kotlin than to listen to our own praising speeches. In the rest of this talk, I want to focus our attention on the upcoming language features. They are mostly connected with improvements on the JVM platform. What are we going to discuss today? First, we'll talk about sealed interfaces and sealed class improvements. Then we'll recall what inline classes are and explain why they became value classes. These features will be included in 1.5, but you can already try them in 1.4.30 by specifying language version 1.5. So let's start. The sealed modifier on a class restricts the class hierarchy to the given subclasses. It then allows exhaustive when checks: no need to add an else branch, the compiler automatically checks that all the subclasses are covered. If you add a new subclass, such a when check becomes an error, so the compiler makes you update the usages. In Kotlin 1.4, sealed classes come with two sometimes annoying constraints: you cannot define a sealed interface, and all the subclasses must be located in the same file.
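A small illustration of the exhaustiveness check; the class names here are mine, not from the talk:

```kotlin
sealed class Result
class Success(val data: String) : Result()
class Failure(val error: Throwable) : Result()

// Used as an expression, the when must be exhaustive, and it is, with no else branch.
fun describe(result: Result): String = when (result) {
    is Success -> "OK: ${result.data}"
    is Failure -> "Error: ${result.error.message}"
}
// Adding a third subclass of Result turns this when into a compiler error
// until the new case is handled.
```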
When your class hierarchy begins to grow, it's quite inconvenient to keep everything in the same file. Kotlin 1.5 fixes both of these constraints. First, sealed interfaces are introduced. Second, all the subclasses only need to be located in the same package and the same compilation unit, which means they can live in different files. A sealed interface, like a sealed class, is useful for defining an abstract data type hierarchy. One class can now implement two different sealed interfaces and be used in exhaustive checks for two different hierarchies; sometimes that can be useful. Another use case where a sealed interface becomes really handy is constraining the implementations of an interface to a library only. Since all the subclasses must be located in the same compilation unit, they can only be defined inside that library. For example, if we make the Job interface from the kotlinx.coroutines package sealed, that forbids implementing it outside the library. That was always the intention, but it was impossible for the compiler to enforce it. Sealed classes are also supported in Java 15 and on the JVM as a preview feature. In Java, you explicitly list all the subclasses of a given sealed class or interface after the permits keyword. The JVM recognizes sealed classes at runtime: the permitted subclasses of a given class are stored in a new attribute in the class file (the name of this attribute may still change). Thanks to this information stored in the class file, the JVM can check whether a given subclass is allowed, whether it's listed in the permits list, and it forbids unauthorized subclasses. In the future, Kotlin will use this new JVM support for sealed classes and interfaces: it will generate the list of permitted subclasses in the bytecode to enable the underlying JVM checks. In Kotlin, you don't need to specify the subclasses explicitly; the compiler can take this information from the declared subclasses in the same package and generate the corresponding list under the hood. If you would like to have a similar explicit list as an optional specification, please share your use cases and the reasons why you miss this functionality. One warning: don't define mixed Kotlin and Java sealed hierarchies. The Kotlin compiler won't know about Java subclasses. At the moment, a Java class can't extend a Kotlin sealed class, but it can implement a Kotlin sealed interface, so don't do it. In the future, with the new JVM support, it will be forbidden: a Java class will be considered an illegal subclass of a Kotlin sealed interface. For now, we'll add IDE warnings to prevent doing it by accident.
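A sketch of what that looks like in 1.5 syntax; the names are illustrative, and in 1.4.30 you would need to set the language version to 1.5:

```kotlin
sealed interface Error
sealed interface Warning

class FileNotFound(val path: String) : Error
class DeprecatedApi(val name: String) : Warning
class UnusedVariable(val name: String) : Error, Warning   // participates in both sealed hierarchies

// Subclasses may live in different files, as long as they stay in the same
// package and compilation unit. Exhaustive checks work per hierarchy:
fun severity(error: Error): String = when (error) {
    is FileNotFound -> "fatal"
    is UnusedVariable -> "minor"
}
```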
We pass an integer constant, but we don't know immediately whether it's seconds or milliseconds or, I don't know, minutes. It's not a type-safe solution. We could sort of overload this function by adding the time units to its name, but that's too verbose. Defining a Duration class solves the type safety problem. We can define extension properties like Int.seconds or Int.minutes to emphasize the time units. It's no longer error-prone to use the greetAfterTimeout function: we have explicit units in the code. Cool. The only problem with this approach is that an extra object is allocated to store the duration. After gaining type safety, we lost performance. Inline classes, now inline value classes, solve it. They combine the performance of primitive types and the type safety of regular classes. Starting from 1.5, you define an inline class differently, as a value class annotated with the @JvmInline annotation, but the concept is the same. Under the hood, the compiler replaces the Duration parameter with Long. That means no extra object is allocated when you pass a value. In this example, the compiler replaces 2.seconds with the corresponding constant in the bytecode. That's what I mean by saying that inline value classes combine the performance of primitive types and the type safety of regular classes. No extra object is allocated: it's primitives under the hood. And the explicit units in the code demonstrate the type safety: Duration is represented by a separate type, it's not just any number. Note that the Duration class has been available in the Kotlin standard library for some time in an experimental state, but it's different from what I'm showing here, since it stores the value in a Double property, not a Long. I used a simplified version in the presentation for clarity. An inline class is a wrapper of only one property, and this property should be read-only. That's what changed: before, defining a mutable property was allowed. An inline class can be a wrapper either for a primitive or for any reference type like String. The wrapper is not always eliminated in the bytecode; it happens only when possible. It works very similarly to built-in primitive types. When you define a variable or pass it directly into a function, its type gets replaced with the underlying value. Here, during compilation it's Duration, but it's replaced with the primitive in the bytecode. If you store a duration value in a collection, or pass it to a generic function, however, it gets boxed. Boxing and unboxing is done automatically by the compiler, so you don't need to think about it, but it's useful to understand how it works. If a function takes an inline class as a parameter, its name is mangled. That means the compiler adds a suffix to its initial name, like for the greetAfterTimeout function in this example. That happens for two reasons. First, it allows overloading a function to take two different inline classes that wrap the same value, like in this example where we have two overloads of the same function taking a name and a password as parameters. Without mangling, they would have the same JVM signature in the bytecode, and such code wouldn't compile. The second reason for mangling is to prevent its accidental usage from Java. Kotlin usages are type-safe: you can only pass the correct type. But if you used it from Java, you could have the same confusion problems as with using primitive types. If you want to use such a function from Java, there is a workaround: you can provide an explicit JVM name for this function.
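Here is a simplified sketch of the duration example described above, written with the Kotlin 1.5 syntax. It is an illustration only, not the real kotlin.time.Duration API, and the names are invented.

```kotlin
// A single read-only property, inlined to a Long in the bytecode where possible.
@JvmInline
value class Duration(val millis: Long)

// Extension properties make the units explicit at the call site.
val Int.seconds: Duration get() = Duration(this * 1_000L)
val Int.minutes: Duration get() = Duration(this * 60_000L)

fun greetAfterTimeout(timeout: Duration) { /* schedule a greeting after the timeout */ }

fun main() {
    greetAfterTimeout(2.seconds)   // type-safe and allocation-free on the happy path
    // greetAfterTimeout(2000)     // would not compile: an Int is not a Duration
}
```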
This annotation changes the underlying name in the bytecode and makes it usable from Java. If you follow the language changes and were using inline classes before, you might be surprised why this syntax changed. It was inline class before, but now it's a value class annotated with @JvmInline. Why? Inline classes now become a part of a bigger story: they are a special case of value classes with a specific optimization. So now let's talk about value classes. Disclaimer: value classes are not yet supported. What I'm going to share now is the future vision and the reason for this syntactic change to inline classes. Value classes will be properly added to Kotlin in future versions. Again, this syntax doesn't work yet, but it's planned to be supported in the future. The idea of a value class is to represent an immutable entity with data. It can contain many values, not just one, but all of them should be read-only vals; mutable vars aren't allowed. A value class is primarily a data holder without identity: it's completely defined by the data stored. Identity checks are even forbidden for them, and that allows major future optimizations. You've probably already guessed that it's connected with the upcoming Valhalla project. Project Valhalla is a big upcoming change in both Java and the JVM. Its model is similar to what we've discussed about inline classes: codes like a class, works like an int. The goal is to combine the performance benefits of built-in primitive types and the type safety of regular classes. And that comes with native JVM support. So let's dive into details. I'll use the Kotlin terminology and say value classes, but assume they are implemented as JVM primitive classes under the hood. First, let's compare them with regular data classes. I define a point as a data class, DataPoint, to distinguish it from the ValuePoint on the next slide. When we define two data points, the corresponding objects are created in memory. The variables point1 and point2 store only references to these objects. Kotlin optimizes equality and calls equals under the hood on our behalf, which compares the underlying data. But we can also check reference equality and compare the two references. Since they are different, such a check returns false. Values stored in variables are highlighted in green: you can see that they are constants directly for primitives, but references for the reference types. The major upcoming change in Java and the JVM is that values of primitive classes, or value classes in Kotlin terminology, can be stored directly. The value classes are going to be stored in variables, on the computation stack, and operated on directly, without headers and pointers. It's a huge change in the JVM, and that explains why it takes so long to implement. Since there are no references, the whole idea of reference equality is no longer valid. Let's compare how a data point and a value point get passed as an argument to a function. In the first case, the data point object is stored in memory, so the compiler puts the reference on the stack when it passes the argument. In the second case, the value point gets passed directly: this object becomes data itself. Note that since there is no identity, there is no state, and a value class can't be mutable. One of the main reasons to support this new concept on the JVM is to enable a flat and dense layout of objects in memory.
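The data class side of the comparison can be sketched in plain Kotlin today; the names are invented to mirror the slide's DataPoint example. The value class counterpart with several properties is not compilable yet, so it only appears in a comment.

```kotlin
data class DataPoint(val x: Int, val y: Int)

fun main() {
    val point1 = DataPoint(1, 2)
    val point2 = DataPoint(1, 2)
    println(point1 == point2)   // true: equals() compares the stored data
    println(point1 === point2)  // false: two distinct objects, two references
    // A future multi-property value class version (value class ValuePoint(...))
    // would forbid the === check entirely, because a value class has no identity.
}
```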
An array of reference types is always an array of references. That leads to the practice of defining separate arrays of primitive values to store both coordinates, x and y, in performance-critical code. Project Valhalla addresses that by introducing primitive classes. The JVM will be able to optimize such storage for primitive classes, again value classes in Kotlin terminology, and flatten it. Does it sound cool? You can ask: when? When Project Valhalla lands on the JVM. As you remember, I promised to share the future vision and the motivation of why inline classes are now value classes. We are really anticipating this upcoming JVM change and want to benefit from it in Kotlin. But before that, we don't want to block our users from benefiting from the new functionality. Inline classes do work now and they'll be stable in Kotlin 1.5. Note that the @JvmInline annotation and the whole primitive classes optimization story are JVM-specific. On other back ends, the underlying implementation is going to be different, for example an analog of Swift structs for Kotlin/Native. There is more to value classes, and that also explains why we want to have this functionality even before the major JVM changes. Mutating methods are a concept already working in Swift, and that works perfectly in theory with Kotlin's immutable value classes. Unfortunately, that's out of scope of this talk; we've already used up our time. I want to redirect those of you who are interested to the talk by Roman Elizarov from the Kotlin 1.4 Online Event. We've discussed the new upcoming features. They are available in Kotlin 1.4.30, so please give them a try. Your feedback is really important to us. If you want to learn more details about what I've covered today, first of all check the detailed design documents in KEEP. They provide the general description of the new features as well as implementation details. We are also publishing this information in our blog and in our documentation. You can join the discussion around these features and other issues on the Kotlin roadmap. Check the roadmap to find the YouTrack issue links. Your feedback is really important to us, especially for the pre-release features. You can influence the Kotlin evolution by trying new features and sharing your feedback and your use cases. Thank you for your attention, and let's Kotlin! I hope you'll enjoy the rest of the conference today. Awesome. We should be live with the Q&A. I want to thank you again, Svetlana, for being with us. Awesome. We have some questions from the audience. If you have more, feel free to keep dropping them in the Kotlin Devroom Matrix channel. We can keep answering them for some minutes. The first one is: will libraries compiled with the -language-version 1.5 flag still work when Kotlin 1.5 is released? Or should we expect changes, so that we need to release a new version of the library again? Yes. This is a pre-release state. Right now, adding language version 1.5 is equivalent to a milestone 0 of the 1.5 release. All the milestones mean that it's a pre-release version, and you need to recompile everything when the RC and then the release version appear. So the answer is no. Awesome. Still a nice tool to keep on trying. Yes. We really want everyone to try them and give us feedback, because this is the time when this feedback can change things; we still have time for fixing something if it doesn't work properly for your specific use case.
I want to ask everyone: please try all this stuff and check how it works for your use cases. Please also share that with us immediately; we're really interested in this. Awesome. Then we have another question, and it's: why was value class picked instead of having a val class? Actually, to avoid confusion, because otherwise you would have the same keyword used for two things. Some people might consider them as very similar functionality, but they're still different things: one is a class which represents the specific concept of value classes, and the other is just defining a property. Yes, they're similar, but to avoid confusion, especially for newcomers, for folks who are not deep into, I don't know, the philosophy of these things and just want to use it for their use cases, the main reason was to avoid that sort of confusion. At some point they were indeed called val classes in previous versions of the KEEPs and design documents. Awesome. We have another question from Martin. He asks if value classes will be supported on older JVMs without Valhalla and, if supported, can we expect them to be faster. Yes and no. The plan is to indeed support value classes on older versions as well as on Kotlin back ends other than the JVM. However, the whole point of Valhalla is that it makes it fast; it makes it possible to implement all these things fast. We can only implement these value classes on older JVMs as regular wrapper classes, as regular boxed classes, so there will be not that much difference. However, Roman Elizarov wrote a very long document about value classes and other things; that happened after I recorded this talk, and I think the link was already provided in the chat. You can read it and see how much more functionality it gives in terms of mutating methods, in terms of supporting mutability in the language. All these syntactic things can be supported even for the previous versions, however without the performance benefits that Valhalla brings. Okay, so we do have a lot of other questions that keep flowing in, so I will read them through. The first one is: can we make a data class a value class? I think that no, you kind of need to choose; I might be wrong here. The thing is that data classes allow more functionality: you can define additional methods and mutable properties, and, sorry, value classes forbid these mutable properties. So if I'm not mistaken, you have to choose. If you have a data class, you have to change it. But yeah, for this case, if a data class satisfies the constraints, theoretically it might be possible to define it as a value class, but I'm not sure. Again, so far only this very constrained syntax is allowed with 1.5. All the other things about value classes come later. In 1.5, the only thing that is allowed is a value class with the @JvmInline annotation, which allows only one property, and it should not be defined as a data class. All the other things, like how they interact with data classes, have to be decided a bit later, at some point when this feature gets supported, I suppose somewhere in the 1.5 release cycle, but no promises. We are very careful. But this is the direction in which it should evolve. Awesome. So we will also post the link to the value classes KEEP notes on the channel, if people want to follow up. So we have a couple more questions and also some minutes to answer them.
The first is a follow-up on the Kotlin Multiplatform side of things: what about Native and Swift, will that be faster, still related to value classes? Yeah, the question is faster than what, I don't really understand this question. Faster than data classes? Again, I think that for the other back ends the major thing about this value classes idea is not the performance but all the other niceties. My talk was mostly focused on covering the JVM and explaining, basically: okay, we'll now have stable inline classes, and afterwards the motivation for why we have this strange new syntax with @JvmInline and value class. For other back ends, the major thing is not performance, because they don't have this primitive versus class distinction the way the JVM does; it's all different there. The main benefits of value classes for them will be all this immutability and mutating functions, all these cool features described in the design document. So for them it's mostly not about performance but about these syntactic features. And they will be implemented as well as possible for the corresponding back ends, because it's also different for the Native one and for the JS one. Awesome. So we have a question on Java records: are value classes also linked to Java records, and are data classes going to implement records as well? Yeah, so with Java records, I'm kind of sorry I skipped this topic from my talk, because at first I had it covered but I decided that it's even more niche, since it mostly interests those who follow the Java interop stuff. You can check our latest blog post; it covers Java records. The idea is that we'll just support them: Kotlin will understand what Java records are. And also you'll be able to annotate a Kotlin class so that it will generate Java record methods and be recognized as a Java record. But we don't have a plan to make all data classes implement records automatically. You can do it manually if you need it for some reason. But if you just live in a Kotlin environment, all the old ways work; you don't really need records, you can just use data classes. And for those who don't know, the major thing that
In this talk, we’ll discuss what the Kotlin team is working on, the priorities we have, and the additions you can expect in the language. The JVM platform is evolving, and Kotlin is keeping up with the new features as they become available. This includes the features introduced by the upcoming Project Valhalla and JVM support for sealed classes and records. In this talk, we’ll discuss how these changes affect Kotlin as a language, and how the Kotlin team finds a balance between drawing on the power of the new JVM versions, supporting the same functionality in older versions, and providing a smooth transition. We'll also talk about how you, the community, can influence the design and evolution of the language!
10.5446/53587 (DOI)
Hello, FOSDEM. My name is Russell Wolff, and this is Lessons I've Learned in Kotlin Multiplatform Library Development. I need to work on easier-to-say talk titles. Anyway, a couple quick words about me. I'm a developer at Touchlab, where we build apps using Kotlin Multiplatform. I'm also the author of a library called Multiplatform Settings, which we're going to talk some about. So, let's start with a little bit of background on multiplatform Kotlin and what that is. This is the Kotlin Dev Room at FOSDEM, so I'm going to assume that most people are familiar at least with the use of Kotlin on the JVM, but Kotlin, of course, has multiple platforms that it can build to. They fall primarily in three groups. There's the JVM, which is Android and server-side stuff. There's JavaScript, which includes web browsers and Node. And there's Kotlin/Native, which includes iOS, native desktop, embedded systems, and things like that. And the multiplatform framework is organized so that you have common code that can compile to more than one of these targets, as well as platform-specific code, which can talk to the APIs of those platforms. There's actually some kind of intermediate stuff that sits right between those as well, but we're not going to go into detail about that. So what does this look like in code? If all you have is kind of pure logic stuff, you can write that in common. Common code has access to all the kind of platform-agnostic parts of the standard library, so things like the collections API, but it doesn't have any platform-specific stuff, so it can't do things like talk to the file system or sensors or things like that. So when you can't do something in common code, the multiplatform framework provides these new keywords in the language called expect and actual. You can have a declaration in your common code with the expect keyword, and it can be a value, a function, a class, or anything else, and you can give it separate actual definitions on the different platforms. As I mentioned, most of my examples here are going to be using Android and iOS, but the same thing is true if you're using any of the other platforms as well. So expect and actual is a nice way to kind of quickly spin something up, but it has some limitations. There needs to be exactly one actual definition for every expect definition. So if you need to be able to switch out your platform definition in different scenarios, you might need to do something else. Instead of, or in addition to, expect and actual, you can also define interfaces in common and implement them in your platform code. So you might have a logger interface, for example, that has an Android logger and an iOS logger. An advantage here is that the implementation doesn't even need to be in Kotlin: on your iOS side, if you're writing the rest of your application in Swift, you could implement your logger interface from Swift. The other advantage of something like this is you have the ability to define a test logger, or a test version of whatever abstraction you have. So, a quick overview of what the architecture of a multiplatform app looks like. The center of your application is going to be this multiplatform module: it has common code, and it can build, say, the Android code to an Android library, the iOS code to an iOS library, and the JavaScript code to a JavaScript library.
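As a rough sketch of the interface-based approach just described, with invented names; the expect/actual alternative and the Android-specific class are only mentioned in comments because they would live in other source sets.

```kotlin
// Common source set: an abstraction the shared code can depend on.
interface Logger {
    fun log(message: String)
}

// A simple implementation usable from common code and in tests.
class ConsoleLogger : Logger {
    override fun log(message: String) = println(message)
}

// In the Android source set a platform-specific implementation might look like:
// class AndroidLogger : Logger {
//     override fun log(message: String) { android.util.Log.d("App", message) }
// }
//
// The expect/actual alternative would instead declare something like
//   expect fun currentTimeMillis(): Long
// in common code, with exactly one actual definition per platform.
```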
And the orange at the center is the common code, and the outside is the platform-specific stuff, which might have, say, platform-specific implementations of certain things. These get consumed by the apps, which then also have the ability to consume any other platform-specific dependency that they have. So that's the power and the flexibility of Kotlin Multiplatform: your shared code is just one library dependency among everything else. It can be as much or as little as makes sense for you in your use case. But what happens when you also need dependencies from your common code? That's what we're going to be focused on when we talk about multiplatform libraries: things that extend the APIs that are available to your common code, such as, for example, Multiplatform Settings, which is the library that I maintain, available on GitHub at this URL. It's a key-value store, so you can save and load simple data, giving things different keys as names. And it operates on kind of a set of simple primitives. It also has a couple bits of Kotlin niceties, like operators and delegate functions, just to make your Kotlin code nicer, depending on your code style preference. And something about this library that I haven't really emphasized much when I talked about it previously is that it's very focused on platform interop. One could easily build a library like this by creating a completely custom Kotlin implementation of everything, where you write some custom file format, serialize it to disk, and do exactly the same on every platform. And that would work in your common code, but your platform-specific code would never really know anything about it. So what I tried to do as I was building the library is make sure that it was using the same source of truth that you might be using in your platform-specific code. So for example, the core of the library is the Settings interface, and I'm just kind of showing one function here as an example so that you can actually read the text on the slide. For example, the Android settings implementation wraps the SharedPreferences API, which is likely what you're using to do key-value storage in your Android code. So if you have a key that you've saved in your common code, you can read it out from the platform-specific code, and they will be synced up. And similarly on iOS using user defaults, or JavaScript using local storage, and there are a number of other implementations as well, including a mock in-memory implementation. So there's a mock settings implementation that just wraps around a map, and you can use that when testing your application code that makes use of the library, without needing to use the actual runtime versions of things that are going to actually save data. And in addition to the different implementations, there are the little syntax niceties that I mentioned before: operators, so you can get that bracket get-and-set syntax, and delegates, so that you can define variables that when you get and set them are backed by your settings store, and we'll talk a bit more about those later. So with that kind of overview of what the library does, let's talk a bit about what I've been working on with it lately. The first thing I want to talk about is some of the thoughts I've been having lately about some of the early API choices, and things that I might have done slightly differently if I'd known then what I do now.
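To picture how shared code might use the library described above, here is a hedged sketch; the class and key names are invented, and only the Settings interface usage is taken from the library's public API.

```kotlin
import com.russhwolf.settings.Settings

// Shared code depends only on the Settings interface. Each platform passes in
// the implementation wrapping its own storage (SharedPreferences, NSUserDefaults,
// localStorage), and tests can pass the in-memory mock implementation instead.
class SessionStore(private val settings: Settings) {
    fun saveToken(token: String) = settings.putString("auth_token", token)
    fun loadToken(): String = settings.getString("auth_token", "")
}
```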
So the initial version of the library, and this is kind of what things still look like, has getters that look like this, where there's a key that you pass in, obviously, and there's a default value so that if that key is not present in the store, it'll return the default instead. There are kind of two main ways that key-value APIs in general handle missing keys: you can either pass a default value like that, or you can just make it nullable. So later on, I added kind of an "or null" equivalent to all these different APIs. One kind of subtle thing that I wish I'd done differently is this equals zero here. The default value argument has itself a default value, which means if you just pass a key in, you still get the non-null version of the function, but when the key is missing, you get zero instead of getting null. So it's a slightly awkward API, because I don't think there are a ton of actual use cases, in retrospect, where you want to just default to zero there without specifying it explicitly. So if I didn't have that, then these could both be the same thing, right? You pass a default value and get the non-null version, or you don't and you get the nullable one. So I've been thinking about doing this refactor, though it's a breaking change. There is an issue on the GitHub if you're interested in giving feedback as to whether or not that's something that I should do. But I wanted to highlight it as an example of something that I did early on without really giving a lot of thought to it, which has consequences down the line if you're worried about maintaining compatibility. Another early choice that I've been giving some extra thought to is around naming. Each platform tends to have a settings implementation that's named for that platform. So the different Apple platforms, for example iOS, macOS, tvOS, etc., all have the Apple settings implementation that wraps user defaults. But recently I also added a second implementation on these platforms that uses the keychain instead of the user defaults API. And now this naming just feels kind of dissonant, right? Why is user defaults the thing that gets the Apple name instead of keychain? Am I saying that you should use one over the other? So another refactor that I'm thinking about doing is changing all of the platform-named implementations to be named after the API that they wrap. This has the advantage of making the interop a lot more clear. But again, it's a breaking change. So the next thing I want to talk about is some cool integrations I've been working on recently with some of the kotlinx libraries. A long-time pain point of Kotlin/Native in particular has been binary compatibility, where every time a new Native version came out, it broke compatibility with existing libraries, and so you had to recompile every library before you could update your application code. And that meant that it was pretty hard to justify having a library dependency within your own library, because you're adding this extra piece that you need to wait on before you can update. But with Kotlin 1.4, the compatibility story has improved: they don't have explicit guarantees about binary compatibility yet on the Native side, but things have tended to be a lot more compatible. And so it's easier to publish integrations with the libraries and not have it block your update path. So that has enabled a couple cool things. The first one I want to talk about goes back to those delegate APIs that I mentioned earlier.
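The two styles of handling a missing key discussed above can be sketched like this; the key name is invented, and the exact parameter defaults depend on the library version.

```kotlin
import com.russhwolf.settings.Settings

fun readCount(settings: Settings): Pair<Int, Int?> {
    val withDefault = settings.getInt("count", 0)   // falls back to the given default
    val nullable = settings.getIntOrNull("count")   // null when the key is absent
    return withDefault to nullable
}
```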
So the settings library provides functions for each of the types to use as property delegates. You can call this int function, pass the key and a default value if you want, and there are nullable versions of these too, so that you can use the property delegate syntax to have a variable in your code which is backed by your settings store on reads and writes. And that's nice for your primitives. But something that I'd always kind of had in the back of my head that I thought would be nice to add was a way to add custom delegates. So maybe you have, say, a user class that has a first name and a last name, and you want to save it as a user instead of having to define a key for each of its properties and save those separately. I thought it would be nice to have some kind of API to make that sort of thing easy to write. But for a while I didn't really have a good way to do that, until I had some conversations with a couple of the JetBrains devs around the kotlinx.serialization library. So you might know the serialization library for its ability to serialize things to common formats like JSON or protobuf, but it also has APIs for custom serialization formats. They provide these classes called encoder and decoder, which basically provide the glue between the kind of abstract serialized form of a class and any arbitrary format that you might want to define. So we can use our settings store as that format. Essentially you give it the settings and a root key, and it's going to go through every member of the class. So if you have that user with first name and last name, it's going to save user first name and user last name as separate values for you. And the top-level API is just another extension function that uses extension functions on settings to encode or decode, and then there's also a property delegate. So it looks a little bit more complex than our original conception. The serialization API requires this KSerializer argument, which is the thing that tells kotlinx.serialization what to do. It doesn't just infer the class automatically, because you might have some kind of class hierarchy that needs to get serialized in a particular way, or you might want to customize how it works. So you have to pass the serializer explicitly, and you have the option of passing this context, which is also part of how polymorphic serialization can work. But you could always wrap these in your own extension function if you wanted to get back to that original API style. So if in your application code you define this myClass-style function, then you can get back to that original syntax that I've been thinking about. So that's pretty cool on the serialization side. The other library that I've spent some time thinking about integrations with is coroutines. One of the things that the settings library provides is that, for some platforms, certain settings implementations are observable, which means they have this add-listener function: you can pass it a callback, and the callback will get called every time the value at that key changes. So the obvious coroutine extension to add is just a flow version of that, right? Where instead of an arbitrary callback, you can get a flow and just subscribe to that flow. And that's actually extremely straightforward, right?
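A hedged sketch of the delegate and serialization ideas described above. The delegate extension exists in the library, though the exact signature may differ between versions; the serialization call is shown only as a comment because its name, package, and opt-in requirements are assumptions on my part.

```kotlin
import com.russhwolf.settings.Settings
import com.russhwolf.settings.int
import kotlinx.serialization.Serializable

@Serializable
data class User(val firstName: String, val lastName: String)

class Prefs(private val settings: Settings) {
    // Property delegate: reads and writes go straight to the settings store.
    var launchCount: Int by settings.int("launchCount", 0)

    // With the serialization module, something roughly along these lines stores
    // the whole class under one root key, with each property getting its own
    // sub-key (e.g. "user.firstName", "user.lastName"); exact names may differ:
    // fun saveUser(user: User) = settings.encodeValue(User.serializer(), "user", user)
}
```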
So if you'd asked me a year ago, I would have thought that the only coroutines extension that Multiplatform Settings could ever need would be this. But then the Android team came and added this new library called DataStore. DataStore is intended as a SharedPreferences replacement. So as a key-value storage library, obviously Multiplatform Settings is going to want to have a DataStore-based implementation in addition to the SharedPreferences implementation. But that ends up being kind of complicated. DataStore is a completely flow-based API. You have this DataStore object and it can store any kind of data, but the SharedPreferences analog is this object called the preferences DataStore. And it has a data property on it, which is just a flow of the full preferences state every time it updates. So if you want to get the value of a particular key, you subscribe to that flow, you map it based on that key, and you get a flow of that value. And if you want to write data, you call the edit function and put in your edits, fairly similar to the SharedPreferences.Editor API that you have with SharedPreferences. So what's complicated here is that everything in this is coroutine-based. That edit is a suspend function, the getters are all flows. So that's all kind of hard to fit into an interface that is not coroutine-aware, like Settings. And so what I've ended up doing to have something that works with DataStore is add two different interfaces, and we'll talk a bit about why. There's a suspend settings interface, which looks exactly like Settings but all the functions are suspend functions. And then there's a flow settings interface, which extends the suspend settings and adds flow getters. And then we can have our DataStore settings, which wraps the DataStore API in the flow settings interface, but can be used as suspend settings as well. So why do we need both of these? Remember, in the base library we have Settings and we have ObservableSettings, and you need ObservableSettings to be able to set listeners, and you need to be able to set listeners to be able to get flows. So if you want to be able to work with a single one of these interfaces from your common code, you need to be able to pick one, right? So if you're using flows, your common interface would have to be one of these coroutine-aware interfaces, and on your other platforms you would convert them to that. You can convert a simple Settings instance to a suspend settings, you can convert an ObservableSettings to a flow settings, and so you can pick which interface you share depending on whether all of your platforms are observable or not. So as a more concrete example, to maybe make this a little more clear: you might have in your common code an expect declaration of flow settings; on Android you use a DataStore settings, on iOS you might take the Apple settings and call that toFlowSettings extension. But you wouldn't be able to add JavaScript here, because the JavaScript settings implementation, which is based around local storage, is not observable. So to work around that, you can use suspend settings instead. You lose the ability to have flows in your common code, but you gain the ability to hit every single platform. So in order to have the flexibility to pick both of those scenarios, I ended up adding both of the interfaces. So that's a bunch of notes on things I've been working on recently.
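A rough sketch of the coroutine-aware layer described above. The function and package names here are approximate, these APIs are marked experimental in the library, and the key name is invented; treat this as an illustration rather than the library's exact API.

```kotlin
import com.russhwolf.settings.ObservableSettings
import com.russhwolf.settings.coroutines.toFlowSettings
import kotlinx.coroutines.flow.Flow

// Any observable implementation can be upgraded to the coroutine-aware
// FlowSettings interface, and a single key exposed as a Flow.
fun darkModeFlow(settings: ObservableSettings): Flow<Boolean> =
    settings.toFlowSettings().getBooleanFlow("darkMode", false)
```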
What else is still on the docket? I'm interested in adding more platforms and implementations. The major thing that's missing right now is desktop Linux. There's a Windows registry implementation, and the Apple implementations work on macOS desktop, so Linux is the only native desktop platform that doesn't have an implementation yet. I'm very interested in hearing from the desktop Linux developer community about what a good API to use there is. I know that's a part of the Kotlin community that sometimes feels a little bit underserved, so this is me kind of reaching out and saying I want to give you something, but I don't know what the best way to do that is. So if you're someone in that community who has opinions, reach out and let me know. And of course, any other implementations that people think would be useful, I'm interested in hearing about. I don't have any others myself that are specifically on my radar, but that doesn't mean that more things can't be added. I also want to improve various bits of the API. Some of the implementations that don't yet have it could have listener support added, and there are various bits of API adjustment that I've been thinking about. And then at some point this library has got to go 1.0. Right now it's been in 0.x releases for its entire life, and I kind of had this idea in my head at the beginning that it would go 1.0 after multiplatform as a whole went stable. And absent any other hard push, that's probably still the trajectory I would be on. But I'm starting to think more and more about what it would look like to start calling this stable and completely, 100% commit to all the API choices. So that's part of why I've been thinking about what are some of the things that I might do differently, because if I do want to release a 1.0 version, that's the moment that everything needs to get finalized. So what else is out there? I've talked a lot about my library, but obviously I'm not the only one. JetBrains has their core suite of things. We talked about coroutines and serialization because of the settings integrations, and there's also the Ktor client, which is your common HTTP client. All of these are at post-1.0 releases now, so JetBrains is fully committed to them. They're kind of core APIs for pretty much any reasonably scaled multiplatform application. They also recently added a version of a datetime API, so definitely check that out if you haven't, because that's something that people have been asking for for a long time. And they've been working on, at lower levels of priority, I/O and atomics libraries. And of course, there's community stuff. When I first gave a talk like this, I had a slide that listed pretty much every community library that existed, or at least the major ones. Now I'm just going to give you a couple links: to a community-maintained list, and to the official mobile multiplatform docs, which have an ecosystem page with a bunch of libraries. The ecosystem has gotten pretty big, and that's pretty exciting. There's a neat movement happening with larger and larger scale libraries that you didn't see a year ago. Square, for example, has their Okio library, and they've been adding file system support to that.
And yeah, other big things like that: the Apollo team has their multiplatform Apollo GraphQL client, which they've been talking about a lot recently. You're seeing larger organizations with larger-scale multiplatform libraries enter the space, which is a sign of how it's maturing. It doesn't mean that there's not still room for you to add your own contributions. The thing that attracted me, that got me to write Multiplatform Settings originally, was the fact that there was this completely new, wide-open ecosystem, which is a really neat opportunity to make your open source mark and jump in and do something before anybody else does. It's definitely harder to do that now than it was two or three years ago, but there's still plenty of room for more things to come in. So I definitely encourage you, if you're interested in open source in general, to think about doing stuff in KMP, because it's pretty fun. So thanks for coming to my talk. I'm happy to answer questions either through the conference platforms or on Twitter or the kotlinlang Slack. You can find me at russhwolf, and I'll also have the slides posted if you need to refer back to them. Thanks.
Software development is hard. It’s even harder when you’re building libraries that other developers will depend on. I’ll talk about my experience with library development in Kotlin Multiplatform, trying to highlight challenges I’ve faced and mistakes I’ve made. We’ll look at this through the lens of recent updates I’ve made to the library I maintain, as well as the current state of the wider Kotlin library ecosystem
10.5446/53590 (DOI)
Hello everyone, welcome to the session on getting started with Kotlin Flow. Thank you all for joining the session today. My name is Abhisheesh Srivastav and I have worked with companies like Samsung, Gaana and Dream11, and I have been coding in Android since Ice Cream Sandwich onwards. You can also find me on Twitter with the handle abhisheesh_shree. So let's look at how we can perform long-running tasks in Android. There are various ways of doing it: either we can do it using threads, or Kotlin coroutines, or there are a lot of third-party libraries out there which we can use. So let's look at how we can do a long-running operation using threads. Assume we have a function compute which does some long-running task and some heavy calculation, and after the computation finishes it returns a result. We can wrap this in a lambda and pass it to the Thread object, and once the compute is finished we can pass the result back to the callback so that the caller is notified. And if we have to do this using coroutines, we can either make the compute function suspending and call it from a coroutine scope, or we can wrap it in a function and then call that function from the coroutine scope. On the left you can see how we do this with a thread, and on the right you can see how we are doing this with Kotlin coroutines. Now let us assume that we have to perform another long-running operation once the first operation finishes. In the thread way, we will have to pass another lambda, and after the callback is finished we will have to again invoke a new function call. With Kotlin coroutines, we can write all the functions sequentially, get the result of the first function and then easily pass that result to the second function as well. If we keep wrapping callbacks inside callbacks, this leads to callback hell; and the Kotlin coroutine version works similarly under the hood, we just don't see it because of the compiler magic. What actually happens is that the function is passed a continuation object, and once the function finishes the continuation object is resumed, which is essentially a callback. Now let us consider a case where we have to perform regular updates. Assume we have a repository which is backed by a database, and the repository is publishing frequent updates, and based on those updates the UI has to update. Can we do this using Kotlin coroutines? Since we know a suspend function only returns a single value and does not work with streams of data, let us look at how we can do this. Here comes Kotlin Flow to the rescue. So let us look at what Kotlin Flow is. A flow is an asynchronous data stream which can emit multiple values and which either completes normally or with an exception. We have been hearing a lot about streams, so let us understand what streams mean. Streams are a sequence of ongoing events ordered in time. A stream could be of user click events, could be of database updates, or could be of anything; we can make a stream of it. In Kotlin Flow we have three entities: one is the producer, then we have an intermediary, and then we have a consumer. The producer and the consumer will always be there, but the intermediary may or may not be there, so it is kind of optional.
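A minimal sketch of the producer and consumer roles described above, using the flow builder; the values and delays are illustrative only.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// A cold flow: the block only runs once collect is called.
val numbers = flow {
    for (i in 1..3) {
        delay(100)   // stand-in for a long-running computation
        emit(i)      // producer emits values over time
    }
}

fun main() = runBlocking {
    numbers.collect { value -> println("collected $value") }  // consumer
}
```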
The producer basically produces data and the consumer basically consumes that data, and if you want to transform the data in between you can apply some function over it; that is what is called an intermediary. So let us look at the flow API here. With the flow builder API you pass a lambda, and from that lambda you can emit values. Usually flows are cold. What that actually means: if you look at the code block on the right-hand side, if we run it, we will see it does not print anything, and the reason is that the collect operator has not been called on it. Flows are activated only when the collect operator is called on them, so it will start printing once we call collect. Now let us look at the flow APIs. We have an interface Flow, and inside it we have a suspending function collect, and that collect accepts a collector, so that we can directly pass in a lambda and emit values; FlowCollector also has a suspending function emit with which we can emit values. Now let us look at how we can create a flow. We have already seen the flow builder API; also, if you have a vararg of elements you can use the flowOf function, and this would create a flow of 1, 2, 3. And let us suppose you have a list: you can directly call the extension function asFlow, which converts that list into a flow. And we have channelFlow, which is used with channels. Now let us look at the intermediate operators we have. There are a lot of operators available; I have just listed a few common ones. We have map, take, zip, buffer, conflate, etc. Let us look at the map operator. Consider the example given here. We have a flow which is emitting the integers 1, 2, 3, then on the flow we have applied a map operator, and inside the map we are multiplying the value by 2, and then finally calling collect on it. What happens is: it is going to emit a value, then the map operator is applied on that value, and then that value is printed. So you can see here: first emit 1, then map 1, then collect 2; then emit 2, map 2 and collect 4, and likewise. Now let us consider a case where we only want to show the even elements, basically filter out the odd elements. We can apply the filter operator and specify the constraint, and whatever elements meet that constraint are passed to the downstream. So here, this emits 1; 1 modulo 2 is not 0, so it does not satisfy the predicate and will not be printed; then 2 modulo 2 is 0, which satisfies it, so it is printed. Now let us look at the buffer operator. Here you can see the flow builder is producing an item every 100 milliseconds and the collector is collecting an item every 200 milliseconds, so the collector here is slow. If you run this block of code you will see that it prints 1, 2, 3, and each item takes 100 milliseconds of emitter delay plus 200 milliseconds of collector delay, so roughly 300 multiplied by 3 plus some offset, which is about 900 milliseconds in total. But if we apply the buffer operator, whatever value the emitter is emitting is stored in the buffer, and only once the collector is ready to accept it does it flow to the collector.
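The map and filter examples above can be combined into one runnable snippet; the numbers are arbitrary.

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    (1..6).asFlow()
        .filter { it % 2 == 0 }    // keep only the even values
        .map { it * 2 }            // transform what is left
        .collect { println(it) }   // prints 4, 8, 12
}
```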
So if we run this program with buffer it will take around 700 milliseconds, roughly, because the first emission is delayed by 100 and then the collector is delayed by 200, 200, 200, which is 600, and in the meantime the emitter has already filled all the values into the buffer. Now let's look at the conflate operator. Here the collector is slow, taking 300 milliseconds per item, and the flow is producing an item every 100 milliseconds. So only the most recent events are collected and intermediate values can be skipped. In this case, the first value is printed, then the collector is delayed by 300 milliseconds; the second value is emitted in the meantime and gets dropped, and the third, which is the latest, is collected by the collector. Now let's look at terminal flow operators. We have collect, which we have already seen. Then we have single: if the flow tries to emit more than one value, single throws an IllegalStateException. Then we have reduce, where you can provide an accumulator and the function which you want to apply. Then we have toList: whatever values have been emitted are populated into a list, and then you can use that list. Then launchIn: launchIn accepts a coroutine scope and the flow runs in that coroutine scope; it's an extension function on Kotlin Flow. Now let's look at the flow properties. Number one is context preservation, and then we have exception transparency. These properties are very important, and they basically ensure that flow code is readable and can be modularized in such a way that you can independently develop the downstream API and the upstream API; they are independent of each other. Now let's look at context preservation. Consider a case where we have a coroutine scope with the Main dispatcher, and then inside the flow builder we are changing the context to Dispatchers.IO with the withContext function, and then calling collect on the flow. If we run this block of code, we will see it results in an IllegalStateException, and the exception message is "Flow invariant is violated". The reason is that the API is designed in such a way that the flow builder cannot change the calling context, so the context should never leak into the downstream APIs. If this were allowed, then consider you're performing some operation: you have launched a coroutine scope and then you are calling collect; this could lead to inconsistent states, because you would always have to be aware of which context the flow builder is actually running in. To prevent that, the APIs are designed so that the context is always the scope of the caller where the flow is being collected. Now let's consider a case where collect has to be called on the UI thread, and in the flow builder we want to perform some heavy operation which is going to block that thread. We can do this using the flowOn operator: with flowOn we can change the dispatcher context. Whatever code you see above the flowOn is called upstream, and below the flowOn is downstream, and the flowOn operator only affects the context of the upstream flow; it does not affect the context of the downstream flow. So the emit code here runs on the IO dispatcher.
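The terminal operators mentioned above can be tried in a few lines; the values are arbitrary, and the flow is cold, so it can be collected repeatedly.

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val numbers = flowOf(1, 2, 3)
    println(numbers.first())                                // 1
    println(numbers.reduce { acc, value -> acc + value })   // 6
    println(numbers.toList())                               // [1, 2, 3]
}
```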
Now let's look at exception transparency and what it actually means. Consider a case where we have a UI scope, a scope which is tied to the lifecycle of the view, and then we have a dataFlow function which gives us a flow, and we are calling collect on it, and on the basis of the data we get from collect, updateUI is being called. It could happen that updateUI throws an exception, or that dataFlow throws an exception. We can wrap this in a try-catch block and catch the exception, or we can pass a coroutine exception handler to the scope and catch it there. And we also have a catch operator: with it, only exceptions that happen in the upstream are caught, and exceptions that happen downstream are not caught. This is the preferred way of doing it, so that downstream exceptions always propagate back to the collector; it is intended that the flow builder should never catch the collector's exceptions. This is how the exception transparency property is maintained. Now let's look at flows in Android and how we can use them. Recent versions of Retrofit support suspend functions, so using suspend functions you can create a flow as well and call emit from there. Flow APIs are integrated into many Jetpack libraries, and code using other reactive libraries can be easily migrated to Flow, because the semantics are very similar and all the reactive libraries share fairly common semantics. The Flow APIs are integrated in Room, so if you want to be notified of database changes you can do that with Room: all you have to do is make the return type of your DAO a Flow of T, and the changes are delivered as a stream. Using this you can observe the changes and act accordingly. The Flow APIs are also integrated with LiveData: for this you have to add the LiveData KTX dependency, and then there is a function available which converts LiveData to Flow, so you can call liveData.asFlow() to convert it to a flow, and there is an extension function which you can call on a Flow to convert it to LiveData: you call asLiveData() and it returns a LiveData object. Now let's look at how we are going to unit test our flows. The way you unit test your flows depends on the module or block where the flow is being used, whether the flow is acting as an input or as an output. By input I mean the flow is acting as a producer, emitting some values; by output I mean a consumer, collecting some values. So how can we test our producers? We can create fake producers. Let's suppose we have a flow producer contract: you can create a fake flow producer, override the flow APIs there and emit test values from them. Inside your test you use runBlocking, which runs the coroutine and blocks the current thread until it completes, so the suspending test body executes to the end. It doesn't skip delays, though: if you have added any delay, that delay will actually elapse, and if you want to skip delays you can use runBlockingTest, which provides a test coroutine scope with a test dispatcher and a test exception handler.
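A small runnable sketch of flowOn and catch as described above; loadFromDisk is a made-up stand-in for real blocking work.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun loadFromDisk(): String = "data"   // stand-in for heavy, blocking I/O

fun main() = runBlocking {
    flow { emit(loadFromDisk()) }
        .flowOn(Dispatchers.IO)                              // affects only the upstream builder
        .catch { e -> emit("fallback: ${e.message}") }       // catches upstream exceptions only
        .collect { println(it) }                             // runs in the caller's context
}
```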
So here you can see we have created a fake producer, and then you can call flow.first(), which returns the first item, and then you can have your assertion logic there: since you know the fake is emitting item one, we assert on the first item, checking whether it is equal to item one or not. And let's assume the flow is returning multiple values: you can directly convert it to a list and then check whether the list you have obtained is equal to the expected list. You can also use all the other operators to test your flows intelligently. For example, if you want to test the second item, you can call drop, which drops the first item so that the second item becomes the first, and then you can add your assertion, or convert it to a list and add your assertion logic there. So look at all these APIs, unit test with them and use them in your unit tests. Thank you everyone for joining the session today; it was an amazing experience speaking virtually in front of all the Kotlin enthusiasts out there. Thanks.
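To round off the testing discussion above, here is a hedged sketch of a fake producer and two assertions using runBlocking; the class, item names and test framework wiring (kotlin.test) are assumptions for illustration.

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking
import kotlin.test.Test
import kotlin.test.assertEquals

class FakeProducer {
    fun items(): Flow<String> = flowOf("item1", "item2", "item3")
}

class FakeProducerTest {
    @Test
    fun firstItemIsItem1() = runBlocking {
        assertEquals("item1", FakeProducer().items().first())
    }

    @Test
    fun emitsAllItems() = runBlocking {
        assertEquals(listOf("item1", "item2", "item3"), FakeProducer().items().toList())
    }
}
```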
Kotlin Flow is a new stream processing API introduced in Kotlin. In this talk we'll learn about the Flow APIs, internal details & how Flow can be used to handle asynchronous streams of data. In this talk we'll cover -- Overview of Flow -- Why we need Flow -- Internals and deep dive into the Flow APIs -- How a flow can be created -- Using Flow in Android apps -- Exception handling in flow processing -- Testing flows
10.5446/53591 (DOI)
Hello and welcome to FOSDEM 2021. Today I'm very excited that I will talk about something that I love, and it's the future. If you are guessing, wait, wasn't it supposed to be Kotlin? Don't worry, don't worry, I love Kotlin too. So it's not the future in the broad sense. Unfortunately, I have to limit the subject, so it will be about dependency management, as of now, 2021, the first half of the year, let's say. I will use a few abbreviations in the slides so you can focus on the content and not just on the very mouthful words "dependency management". Okay, so what we'll talk about: we will see commonly known problems, some that you already know about, but you might not know the best solution, maybe not the best, but some interesting solution at least. So the solutions we have now, and solutions that we might have soon. We will also talk about lesser-known problems that people usually just ignore, just like I have done until now, because there were no clear solutions. And then, what solutions might come at later points in the future to resolve these problems. It will also be linked to a few ideas. Okay, so one common problem is upgrading dependencies. Let's see how the process goes, with some pseudocode, in Kotlin. When you need to upgrade dependencies in a Kotlin project, that project might have multiple modules. So for each module, you need to upgrade the dependencies. And in each of these, you need to find the build.gradle file, or build.gradle.kts file if you like Kotlin that much; I do. And here you have the dependency declarations. For each of these dependency declarations, in each of these modules, you have to ask yourself: okay, should we even try to see if there are any updates for that thing? For example, some dependencies might not be worth checking because they might be kind of abandoned or not updated often. So sometimes we just want to save time and we don't bother; we just keep it and go on to the next dependency declaration. But if that's not the case, then we search. And we search how? Well, by going onto the web. That sometimes costs a lot of focus, because as we jump into the browser, we are exiting the IDE and also opening the door to some distraction. So we search for any available updates, and if there are none, then fine, on to the next. Otherwise, we have to pick the new version, because sometimes we don't want to pick the latest. The latest is not necessarily the greatest: for example, it might be an alpha version, or it might be a stable version that has a major bug and a pending hotfix, or a version that is incompatible with your project and you know it. So you have to pick a new version, or just skip it if no version suits what you have to do now, and then you replace it. And you have to do this for every dependency declaration in every file. I will talk about a few problems with this, but you can already see that there are a lot of steps that the developer needs to do, multiple times in a loop. And once you have done that, of course, you need to address issues, if any. So that's pretty simple: you check for issues, you have a list of issues; maybe you don't keep a list, but roughly that's what you do. If there are no issues, then fine, the upgrade has probably been successful, unless you didn't check enough for issues. And if you have issues, you need to fix them all, and once you're done, you need to check that there are no new issues. So that's something that can be recursive; you can see that we might have an infinite loop.
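To make the buildSrc centralization discussed next concrete, here is a hedged sketch; the object name and coordinates are illustrative, and the version numbers are just examples from around early 2021.

```kotlin
// buildSrc/src/main/kotlin/Libs.kt: Maven coordinates centralized in one place.
object Libs {
    const val coroutines = "org.jetbrains.kotlinx:kotlinx-coroutines-core:1.4.2"
    const val okhttp = "com.squareup.okhttp3:okhttp:4.9.0"
}

// Each module's build.gradle.kts then refers to the constants:
// dependencies {
//     implementation(Libs.coroutines)
//     implementation(Libs.okhttp)
// }
```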
So that might be a problem, but it's just part of the job at this point. Let's switch back to the slides. You can see that there are too many manual steps. It's also only convenient to upgrade to the latest version, not to what I call an intermediate version. For example, you want to upgrade to 1.2.9 and not to 1.3.0, because you need more time to check that 1.3.0 is not breaking your project and is compatible with all the other dependencies that might also use the one you want to upgrade. I say it's inconvenient because you have to keep a lot of tabs open, I mean files, because if you have 50 modules, which is not so unusual in a real-world, professional project, then you also have a bunch of tabs open in the browser. So to test with a given version and check, okay, for that dependency there are these many versions: it's not always easy, not everything is in the right place, you have to juggle and switch around. That's why I say it's a little inconvenient. There are other problems too, of course: compatibility puzzles and regressions. Regarding these many files to open, some people have opted for something called buildSrc, a Gradle feature that lets you write some code, where you can have constants that hold the Maven coordinates, which allows you to tell the build system: this is the dependency that I want. You can centralize that into a single place in the project, and then you use this code in all the build.gradle or build.gradle.kts files. That's great because, once it's set up, it's centralized, and you can update all of the modules at once. Usually you don't want different versions across modules, because it makes the project more complicated; there's rarely any reason to, sometimes there is, but it's rare, and you can still work around it if you want to keep some kind of technical debt in one part of the project but not in another. The problem with that approach is that you lose the "upgrade available" warning that Lint gives you in an Android project. Fortunately, you can work around that with something called the Gradle versions plugin, which is not a first-party plugin, and which I'm showing you right now. It's made by Ben Manes, so thanks to him for that. Once you have added this plugin, it will show you a report in the form of console output, for example telling you that a given library can be upgraded, or it can also give you a JSON report or even an XML report. But you still have to manually update the dependency notations that you have in buildSrc. Still, it's a great way to discover updates without having to search manually for every dependency that you have. Also, you might accumulate some unused dependency notations if you were using a dependency in the past and you are no longer using it. That's a bit of a problem because unused stuff in the code just makes the code base harder to navigate; not really for the computer, because it will not have an impact on the size of the final app, and the impact on your machine is really tiny in the grand scheme of things. But it makes it seem like you depend on a lot of things when actually maybe you use half of them. So that's a bit of a problem.
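To make the buildSrc pattern and the Gradle versions plugin mentioned above concrete, here is a minimal sketch. The object names, coordinates and version numbers are illustrative, not taken from a real project.

```kotlin
// buildSrc/src/main/kotlin/Libs.kt: centralized dependency constants.
object Versions {
    const val klaxon = "5.4"
    const val coroutines = "1.3.9"
}

object Libs {
    const val klaxon = "com.beust:klaxon:${Versions.klaxon}"
    const val coroutines = "org.jetbrains.kotlinx:kotlinx-coroutines-core:${Versions.coroutines}"
}
```

Modules then declare `implementation(Libs.klaxon)`, and a single edit in buildSrc updates them all. The update report comes from Ben Manes' plugin (the plugin version below is an assumption):

```kotlin
// Root build.gradle.kts: adds the `dependencyUpdates` report task.
plugins {
    id("com.github.ben-manes.versions") version "0.36.0"
}
// Run `./gradlew dependencyUpdates` to get the console, JSON or XML report.
```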
I think it's because IDE support for buildSrc is a little complicated, which is why it doesn't always detect that a notation is unused. But the biggest problem of this approach, by far, is what you are seeing now: it invalidates the whole build. Once you edit buildSrc, every time Gradle sees a change there, it doesn't know whether you are changing the rules of the game, so it doesn't know how to handle incremental compilation. It's unsure, so it just invalidates the whole build of every single module, even if what you updated was supposed to target only one module. That costs more CPU time, and it can be really significant, especially as your project gets larger and more complicated; you basically have to do a full, clean build, and that's quite costly. Another solution is to use version ranges. I don't recommend it because it leads to unreproducible builds, so please don't. But there's another problem: you don't even know which version you are using. If you use a plus, you don't know which version it actually resolved to, so it's really hard to be sure what you actually have in your program. It also implicitly opts you in to potentially unstable versions, an alpha for example, or even a snapshot. If you use a snapshot for one of the dependencies to test something, then it might also opt you into snapshots of other dependencies, and that can create a lot of compatibility issues and risk, without you knowing what's going on. And just like you don't want a doctor who doesn't know what he's doing, as a software developer it's better that you know what you are doing. That said, if you use dependency locking, you wipe away all of these build reproducibility issues. One issue that remains, though, is that it doesn't help in selecting the version that you want: if you remove the locking to upgrade, it will just pick the latest, and that still might be an alpha or a snapshot, or not what you were looking for. So there's another solution. Because the existing ones were unsatisfying, someone in the community called Jean-Michel Fayard started a project that I kept working on, as I had the same issues, and I want to give him a lot of credit for that. That other solution, which I am going to show you, has no problem regarding build cache invalidation: when you upgrade, it just tells Gradle "this is a new version", and Gradle will invalidate only what needs to be invalidated, not the whole build. The versions are centralized, just like with buildSrc, but in a plain, readable file, not a Kotlin file, because the Kotlin files you have in buildSrc are too complicated to edit automatically. And yes, we edit this file automatically, because that's where the automated updates lookup takes place. There are also fewer steps and less CPU time per upgrade overall, plus a few bonus features. Finally, it's just a Gradle plugin. So I want to show you in pseudocode how it looks. We have seen how it was the old-school way, but there's another way, with refreshVersions. You run the Gradle task, which usually takes less than ten seconds, or less than five, depending on the size of the project; for a big project we have seen it done in ten seconds. You open the single versions.properties file, and then, for every version entry that has available updates, it's very easy to spot them.
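Before the demo, here is a compact illustration of the two approaches dismissed above, a dynamic version range and Gradle's dependency locking, in a build.gradle.kts. The okhttp coordinates are just an example.

```kotlin
dependencies {
    // Dynamic version: unreproducible, and you don't really know what you resolved to.
    implementation("com.squareup.okhttp3:okhttp:4.+")
}

dependencyLocking {
    // Pins whatever the dynamic versions resolved to; regenerate the lockfiles with
    // `./gradlew dependencies --write-locks`. Removing the lock to upgrade still
    // just picks the latest, which is the remaining problem mentioned above.
    lockAllConfigurations()
}
```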
You will see, because I'm going to do a live demo shortly: you pick a new version considering the risk, and you can upgrade very easily by deleting a few lines for the versions you don't want and keeping the one you do want. Of course it doesn't handle all of the other issues, but by making the upgrading process simpler, it gives you more time to do it more thoroughly and to pick really the right version, instead of just trying your best bet because the whole thing takes too much time otherwise. Back to the slides. So we've seen the pseudocode, which was cut short, but I want to show you a real-world demo. Here I have the Kotlin libraries playground, a third-party project made by Jean-Michel Fayard and me, where we are demoing our project. We have a few modules; let's pick, for example, the Kotlin JVM module. We have a build.gradle.kts file, we have a few plugins, and you can see the dependencies here. One thing you can spot is that there are no versions here. What we have instead is what we call a version placeholder. As a convention we use the underscore, because no one else is using it and Gradle doesn't fail when we use it. Our plugin will take this and replace it with the version specified in another file, which I will show you right now. Here it is: this is the versions.properties file. If I go back and search for Klaxon, you can see the version is the underscore. All I have to do is Find in Path in the IDE, and then I see that in versions.properties, here it is, I'm using version 5.4. But now maybe these have updates. To look for updates, you open the Gradle tool window or straight up launch the Gradle task; I'm doing it from the IDE, but you don't have to. In the help category, you will find refreshVersions. Let's execute it. It takes a few seconds; it's doing a lot of requests in parallel, so it's fast. And boom, in five seconds it's done, and you can see that now there are a lot of edits. I'm going to close this window so you can see it better. There are a lot of edits, and all of them are just comments. So it edited the file, but it doesn't change the build: the dependencies stay the same. You, as a developer, have the responsibility to make the final choice; you simply see all of the updates that are available. For example, if I want to upgrade kotlinx.coroutines, you see that I'm using version 1.3.9. Let's see a place where I'm using coroutines in this project. Here, I'm using coroutines, you can see it's from kotlinx.coroutines, and I want to use something called awaitCancellation, because, for example, this code is causing a problem, so I want it to just suspend forever. But it's not there. Why? Because it's not in version 1.3.9; it was introduced in version 1.4.0 of kotlinx.coroutines. So what I'm going to do is upgrade to the latest, because it's stable, and I know from experience using it in other projects that it will not cause any issue. I have to refresh the Gradle project to reload it, so I'm doing that; it takes a few seconds, maybe five more. Okay, now let's see if the autocomplete is okay. And yes, now we have awaitCancellation, because we're on version 1.4.2. So this is all it takes to update dependencies with our plugin. There are other advanced features.
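Before looking at those, here is a rough sketch of the core mechanism just demoed. The coordinates are illustrative, and the versions.properties keys and update comments are shown only approximately; check the refreshVersions documentation for the exact format.

```kotlin
// A module's build.gradle.kts: the underscore is the version placeholder.
dependencies {
    implementation("com.beust:klaxon:_")
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:_")
}

// versions.properties at the project root holds the actual versions, e.g.:
//
//   version.klaxon=5.4
//   version.kotlinx.coroutines=1.3.9
//
// Running the `refreshVersions` Gradle task adds comment lines next to each entry
// listing the available updates; upgrading is just keeping the line you want
// and reloading the Gradle project.
```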
One of them is that, as you can see here, we already provide ready-made dependency notations, so you don't have to build your own buildSrc. You can just use what we provide for popular libraries. We don't do it for all libraries, but we do it for the most popular ones in the Kotlin ecosystem, because that's our main target, although it works for any Gradle project, no problem. So here, if I want, for example, to use the Android version of a library, let's say I were in an Android project, then all I have to do is start typing it, and you see, boom, I have the AndroidX notation. It's that easy, and under the hood it has the version placeholder. So this is something you might want to try: next time you have to upgrade dependencies, there's a path to migrate. You can visit our website, where we have a guide on how to set it up. It's not really complicated; it takes a few minutes, and definitely less than an hour. We have a GitHub page, it's fully open source, and you can find it on any search engine: Google, Bing, Qwant, you're covered. One thing to know is that we are in the process of creating an open source software organization gathering the people who know how to make Gradle plugins like this one, and this project will move there soon. But we first want to get the contributing guide done before announcing that change of ownership; it will be shared ownership. It's MIT licensed, so you can do whatever you want with it, but please be nice. And it's even more helpful for Android projects, which I want to show you. If you are making an Android project, we have almost all of the dependencies from AndroidX; you know AndroidX, it's more than 70 families of dependencies. So let's say you want to use the lifecycle coroutine extensions, I think it's just lifecycle-runtime-ktx actually: you can do it, and it's that easy. All right, something close to what I just showed you is the Gradle version catalog. It doesn't do what you've seen in the versions.properties file with all the comments for available updates, but it's an alternative, a first-party thing actually, that will kind of overlap with the dependency constants some people put in buildSrc. So that's a great thing; it's coming in Gradle 7.0, and I'm quite certain it will be an incubating feature. There will be some tooling to get autocompletion in the IDE and other nice things, and it's compatible with refreshVersions despite some overlap, so if you want to use the version catalog alongside the versions.properties approach, you can, no problem. We're happy that the anti-dependency-hell community is growing. Okay, I want to talk a little about the risks of third-party dependencies, beyond these very common upgrading problems. There's always a risk of API breakage on upgrade. In the Scala ecosystem they have some automation for this, where a dependency can provide migration rules, or code, or scripts, I'm not sure of the details, but it's something I wish we had similar facilities for in the Kotlin ecosystem. There are possible behavioral changes that might break your app when you upgrade, and you might hit unforeseen bugs: sometimes you only discover them in production, either because you didn't test enough or because the customer has a setup really different from what you foresaw.
You might also have runtime vulnerabilities, and something that is a little sad is that we don't have notifications for them. It's quite rare in the Kotlin ecosystem so far to have runtime vulnerabilities in third-party dependencies, but I know it has happened in the wider JVM ecosystem sometimes, and there's no notification, so you have to figure it out on your own. I guess we could do better; that's something I wish could be improved. Another risk is abandonment by the authors, because you might not have the resources or the time to take over the maintenance of something that you depend on and that might still need to be upgraded, because some other API it depends on evolves and breaks in some way, be it behavior or API. There's also the risk of ownership and policy changes, for better or worse. A governance team that takes ownership of a project you depend on might introduce stricter rules for increased security, fewer bugs or whatever, but it might also be more lax, and that might, when you upgrade, decrease the quality of your app because of those third-party dependencies. So that's a risk, that's a problem, and I hope we can find solutions to improve this. And I think that, just like on the internet you can have reviews for a restaurant, for example, maybe we could have that for third-party dependencies as well: some community could provide information about what it was like to use this thing. Were there any performance problems, security issues? Usually you are blind when you come to a dependency. Also, some projects might get renamed: for example, the Maven coordinates might change, and this is also a risk, because you might lose track of the fact that there are still updates but you need to change something. Usually it also comes with API breakages, so that's also an issue. Another issue is library discovery. It's not that easy to find something that is right for the long term for a given problem, and writing your own is not always a possibility. For that, there's something that already exists. I don't find it perfect, but I want to show it: it's called Package Search, by JetBrains. Package Search is still kind of a work in progress. As you can see here, I can filter for multiplatform, which I think is a great thing, because this is targeted at Kotlin projects, including multiplatform ones. For example, let's say I want to use GraphQL, so let's search for GraphQL. And here it finds me Apollo, so great. But as you can see, there are a lot of entries; I wish it had a better sense of the organization of the project, because there's kind of a duplicate for iOS. For example, there are multiple CPU architectures: depending on whether you compile for the simulator on an Intel Mac or for an iPhone running a 64-bit ARM CPU, there are multiple variants. So I think this could be improved, but it's nonetheless very interesting, and there's ongoing integration with the IDE; I guess part of it already works in IntelliJ, I'm just not using it yet because I don't need it for now. But it's really interesting. You have some key information here; you can see there's also a tag on Stack Overflow, which is a great thing to know. It's really easy to copy-paste the dependency, although it will not be managed by refreshVersions by default, and it doesn't deal with updates, but that's another problem.
So one can dream of the perfect dependency discovery tool or platform. Here are a few ideas I have. Of course, I would dream of something that is ergonomic, capable and beautiful, maybe like Package Search, who knows. If it's not really ergonomic or not beautiful, people will usually use it less, especially in the wider community of developers, because now there are a lot of people who come from different backgrounds and can't always face all of the complexity at once; it's important to have a good first experience. I wish such a tool had filtering and sorting. For example, in Package Search there was no way to say: okay, I want multiplatform, but I just need it for iOS and Android. Or: I just need it for the JVM, I don't really need multiplatform, but I want it to run on Android and also on a desktop JVM. There's no way to ask for this so far; it's just plain text, which is not the best way to describe your requirements, and it just does simple text search. So I wish a discovery tool had something like that, and could also detect any incompatibilities with your project. For example, I could say: for this module in my project, you know that it compiles for this architecture, this platform and that platform, so narrow the results down to what I need. And if nothing is found, tell me: okay, this is not compatible, for example because it depends on APIs not available on Android, or because it requires a native environment and doesn't work on Android, that kind of thing. And something I would really like is feedback from trusted users. Trusted users, I think, are a little complicated, but we kind of need them in the open source community, because some people might just say "I don't like it" when they didn't care to read the documentation, for example. That's why it's important to have users who are trusted, so that when they post a review it's authentic. I wish a tool had that. There might be other ideas, and for this to work well we will need some metadata. Metadata about, for example, software compatibility: operating system, which depends on the transitive dependencies, but also CPU architecture; bytecode version is also something that might be a requirement. What are the transitive dependencies, and which API subset is used? So not "we depend fully on this", but "we depend on these parts of the dependency". Then you know that if a transitive dependency breaks that part of the API, it's a problem, but if it breaks another part of the API that you are not using, it's not a problem, and that could make for more automated upgrades, where the tool could tell you: you just have to test it, but it should not break the API. We could also use API diffs. There's something called japicmp, which stands for Java API compare, I guess; it can already do really powerful API diffs, but it's not used in upgrading tools so far, because upgrading is still quite manual. There's also dynamic metadata that could be used: for example, known bugs and the affected APIs, which would be defined by the maintainers of the project, and also alleged bugs, so reports that have not yet been acknowledged or verified, plus reliability reports and binary compatibility commitments from the authors.
The author might say: okay, as long as we stay on this version line, I promise I'm not breaking compatibility. And we could actually also enforce that with specific tools if we wanted to. So, that was fast, and now I'm taking questions, and after the talk I will also take questions. You can hit me up on the Kotlin Slack, on Twitter and of course on Matrix right now. Thanks for watching and listening.
Dependency management in the Kotlin and JVM ecosystems is great, especially for Gradle users, but there's room for improvement. Some tasks, like upgrading dependencies to the right versions, are still tedious and time-consuming. There are also compatibility gotchas because of the lack of metadata. This talk will start with a mention of the different problems that come with dependency management in real-world projects. Then it'll show how the developer tool refreshVersions (MIT licensed) tackles some of these issues in Gradle projects, saving a lot of time when upgrading dependencies. Finally, I'll talk about what the future can be for dependency management, be it from new features in future Gradle versions, or tools or conventions that the community can create to improve the status quo.
10.5446/53592 (DOI)
Hello and welcome to This Spring Shall Be Challenged, does it have to be spring all the time. My name is Hager Schanhoer, this is my very first talk, so please bear with me and enjoy the talk. A few words about me. I'm a professional developer for over 15 years now, so I started coding in the fourth grade and then, well, the rest is history as they say. I really, really enjoy it. I'm more than a decade on the JVM, so back in 2009-ish, I started to use Java mainly and since 2019, I'm totally in love with Kotlin and had quite some projects with it. I'm also the founder of Schanhoer Software, so we are a consultancy company providing new services around selecting text stacks and implementing services and we are remote first, so if you're looking for help for your projects or need some support in finding your text stack, just get in touch. Beside this, I'm also co-organising the virtual Kotlin user group and the Kotlin user group and I do support for quite some time now, the Java event. I will put some links for these things into the, I think it's the last slide, so if you're interested, just check them out and get in touch if you have questions. Also, and last but not least, I'm also a podcaster, so my podcast is coding with Holger, the link is at the end as well and I do feel like I want to do one or two more, so let's see what time permits. Right, so what are we after today? Today, we will have a quick view on how I see, so it's a really opinionated view on how current projects are essentially specced or how the decision comes to place on what to use, which usually means spring. Then I will show you what the challenge will be. It's don't be too excited, but it is taken from a real world project I just worked on recently. I will introduce it to the contenders and show you the findings I had during really, really lovely time implementing the same thing over and over again in four different frameworks and give you a very opinionated and well, yeah, a very opinionated conclusion on what I found out. So what I see in projects nowadays is that if they are not already in progress, I see loads of managers deciding on what frameworks to use or technology to use and this is usually based on Google results. Interestingly, this was one of the first projects here in Germany I did as a freelancer, so they wanted to change their text tag and they did some Googling, so they thought they want to go with Kotlin, which was great, but at the same time they said we have to use spring, which turned out to be not the best decision. Spring is usually seen, besides Agile in Germany, to be a silver bullet and I would strongly disagree with it. We will see some interesting findings towards the end of the talk, but it doesn't need to be spring all the time on the backend. And also what I see is that a lot of companies are more concerned of finding framework followers, so people considering themselves as a React developer or as a Spring developer, rather than software developers with experience on the back, all front and side of things and having worked with a couple of frameworks, so being able to switch between frameworks, which by the way I don't like, obviously. So what's the challenge? The challenge is very simple. So I want to have a simple rest endpoint. It will be a get endpoint and it should return a list of articles with some data attached to it and this data is all hold in a database. 
A relational database in this case, it doesn't really matter a lot, but just mentioning it, I'm more fan of the traditional relational database, which would be a completely different talk in itself. But yeah, so it will be this one and it looks simple, but it turned out in this project I mentioned before, I teased before, that this was already point of interest for my previous client, one of my previous clients. So they thought that there must be something quicker, it must be possible to be, to have a quicker response. This essentially spawned the idea for this talk. The output will be JSON, so there is some transition or some transformation involved and we want to have authentication. For the sake of this demonstration, it will be basic authentication, but essentially we don't want to have all our services exposed without authentication, right? So I should stop clicking buttons randomly on here, sorry. So what are the contenders? The contenders are four, two, four, yeah, four different frameworks, including Spring. And how did I select them? So besides them being in the media or in one or more projects of myself, I had a couple of requirements to them, so they should be easy to use and easy on the resources. So I don't want to spend too much time on setting things up, so get all the dependencies together and I have to choose the right versions and all these things. This needs to be easy. And also at the same time, they should be easy on the resources, which means I don't want to be, well, in charge to do a lot of optimizations from the very beginning to get them to perform quickly. And yes, we will have to optimize and yes, we have to look into how we code things. But just spoiler, in this case, the implementations I did, and I will give you a link at the end to the code to it, so you will see it's rather easy or simply implemented. This is on purpose, so I didn't want to do any optimizations. I want to see how these frameworks perform, if done bluntly. They should support Gradle, because I like Gradle, because in Gradle, I can have my build file in Kotlin script, but this is a personal taste and it would be funny if they support me, and they usually do, so Gradle was a bigger thing. And they should have Kotlin examples. Well, Kotlin examples, Kotlin bits in the documentation, if they aren't Kotlin or pure Kotlin by themselves like Ktor. And from time to time, we do have, we can deal with configurations or with code differently in Kotlin than we do in Java. So I want to see how we can use them in Kotlin or quickly copy and paste things to get started quickly. They should provide bootstrapping. This is what I meant with, what I mentioned earlier already with easy to use, for example, or it falls into this category. So I want to have a CLI or a web service like the starter, ProRestart, Spring IO, I think it is. So the initial setup where I can say I want to have a web server or I want to have whatever and get a basic project created again. So I don't want to have to copy all the, I have to type all the dependencies and get the right versions. They should be open source, of course. We all love open source. This is one of the reasons why we are here at Foster, right? And at the same time, I headed over and over again that open source literally saved my vacant because I could fix bugs or I can contribute back by fixing bugs. On top of this, all the frameworks should have support for authentication. For example, all off sessions, the more easy the better, the easier the better it is. 
And last but not least, this is also something I learned very early on. It's always good to have metrics collected. So they should, all the frameworks should provide us with metric collection capabilities. So if we think of Spring, it would be a trader where it's a matter of adding a dependency and then get things like a health endpoint or an endpoint to collect information about a number of requests and all these things. And it's easy to extend. They should provide something like this as well. And most, more or less, it's usually done in micrometeon nowadays. So what a plus we need some little helpers for our contenders. And I chose a couple of things you might be or might not be aware of. So one of the things is E-Bean. E-Bean is a, I think, underdog in the ORM space. And I particularly like the query being subproject of E-Bean. I will show you in a minute. And we use this SNOM. And this, I chose this because I can use this in all of the frameworks either directly or by just using it. I also use coinDI, a dependency injection for the frameworks which do not come within own IOC container, I use Gatling for some basic performance load tests. And Kenny is in front of the service on my Linux VM I used to do some load tests to get a bit more of a realistic view on it. So I didn't optimize it by essentially spun up a couple of Linux VMs, one with a Postgres database, one with Docker on it to run the applications in the Docker container and one to run the Gatling tests to get a little bit more fire under the hood. So let's switch to some code. This is the project I will give you a link to. So if we look into the structure, you will already see what you will find the contenders, your wikator, micronaut spring. There's the performance bit which is the load testing part and shared which holds all of our really, really highly sophisticated code for the ORM which is essentially composed out of two bits. So if we look at this, we have the base model which brings ID and versioning. And we do have two instance which essentially holds information about when a particular entry was created and when it was modified. So nothing really exciting here but you can see that eBean provides a very JPA-ish, can we get this? Here we go. JPA-ish annotation support. So essentially we have this in here and behaves near the same as hibernate and when created, then modified our nice little extra features. And then we have all of our domain models in this little class here or this little file. So essentially we have an article which I already mentioned. It has to have a title. And then it can have an abstract, so essentially a short description on what's going on in here. It will have for the sake of this demonstration one image attached to it via a foreign key. It will obviously have a body. I can have once it's done and it has a publication date and a couple of keywords attached to it. An image is, this is a really, well, a stripped down version of what I had in the real project. For now it only has a file name and original name of an image. So this was much bigger in the other project, but just to keep this as roughly comparable to what I had before and keywords are essentially just a set of additional tags, keywords, categories as you want to call them or however you want to call it. And this is all we need for our OIRM. We do use, so if we go back here, so if we check this, our eBean comes, our query bean comes with a query bean generator which uses the Kotlin annotation processing tool. 
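The entity layer just described looks roughly like this. It is a hedged sketch using Ebean's annotations as I understand them, with illustrative field names; the real project differs in detail. The QArticle query bean discussed next is generated from these classes by the annotation processor.

```kotlin
import io.ebean.Model
import io.ebean.annotation.WhenCreated
import io.ebean.annotation.WhenModified
import java.time.Instant
import java.time.LocalDate
import javax.persistence.Entity
import javax.persistence.Id
import javax.persistence.ManyToMany
import javax.persistence.ManyToOne
import javax.persistence.MappedSuperclass
import javax.persistence.Version

@MappedSuperclass
open class BaseModel : Model() {
    @Id var id: Long = 0
    @Version var version: Long = 0
    @WhenCreated lateinit var whenCreated: Instant    // set automatically on insert
    @WhenModified lateinit var whenModified: Instant  // set automatically on every update
}

@Entity
class Image(var fileName: String, var originalName: String) : BaseModel()

@Entity
class Keyword(var name: String) : BaseModel()

@Entity
class Article(var title: String) : BaseModel() {
    var abstractText: String? = null                  // the short description
    var body: String? = null
    var publicationDate: LocalDate? = null

    @ManyToOne
    var image: Image? = null                          // one image via a foreign key

    @ManyToMany
    var keywords: MutableList<Keyword> = mutableListOf()
}
```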
So this gets generated either by a extension in your IntelliJ or Eclipse or via gradle if you use gradle to build it and then they use kept. So this generates code and we will see this in a second, which generates all the needed stuff. So in comparison to Hibernate this doesn't happen to runtime. This happens during compile time which is a huge performance improvement. And performance tests I mentioned before, just a quick glimpse in here. It's really, really, it's simple. So we set up the HTTP protocol in here. We point it to a endpoint and then give it a scenario, the scenario in here just says, okay, get all the articles from this endpoint here. And we want to have ramp up user numbers of up to 1000 within a minute and it shouldn't take longer than two minutes at all. And then we run this against the, in this case, the remote instance. That's all we need on this bit. So let's head on. So the contenders here we are with Spring Boot and I use Spring Boot in well as I refer to it as Spring Boot. But what I do mean is essentially all the Spring Project Spring Boot combines and bundles with the correct version numbers, right? And we do have some pros and cons on mostly all of the other contenders and with Spring Boot, you all know it, it's around for quite some time. It has a really, really good documentation out there. So if you look into the documentation given by the project itself, but also if you look at pages like Belldom, it's end-on-one-stack overflow. So we have quite some resources there. And it has a huge community which also contributes to documentation and help. And it's backed by VMware. So yes, it is still part of Pivotal, but Pivotal was bought by VMware quite some time ago. So it has a big backer there. It's on the other hand also on the cons side, it is rather heavy on the reflection part. So nearly everything, all of the quote unquote magic happening in Spring happens during runtime via reflection via annotation processing in memory. So this takes quite a hit on the performance side. At least we say so. And it comes with loads of dependencies. We will see this in the size of the FET jar we produce with the simple Spring implementation. It comes with loads and loads of Java baggage. So it is initially written in Java because it was a contender or a competitor to Java E back in the times. And it still has. So they do love Kotlin. They say this and they show this. So for example, with Spring Foo, we get a more Kotlin like DSL for all the things. But also Spring security, for example, with the next version, they come out with a diesel like approach to configure your security, which will be nicer to do, right nicer to read. So everyone who tried to set the security up and get this form in a readable way will know what I mean. Last but not least, it's slow on the startup. And yes, with Spring Boot 2, they improve this a lot. And I mean a lot. But it's still one of the slows out of the bunch. If we look into the implementation, it's rather straightforward. It's all what you would expect from a Spring framework as a Spring project. We have our well, hooking point or our startup point, we have a simple class which is annotated. Then we just started and we have a classic MVC pattern. So the controller holds the rest controller logic. Well, logic, it's a map to slash article and we say, okay, get mapping here. So this is get endpoint, which calls in turn the article service, which is auto-wired in the DIContainer we use or used in here is the one from Spring. 
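A hedged sketch of the Spring pieces being walked through here: the controller, the service, and the basic-auth configuration that comes up next. It reuses the Article entity and the generated QArticle query bean assumed in the earlier sketch; class names, credentials and the pre-5.7 Spring Security configurer style are assumptions, not the project's actual code.

```kotlin
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter
import org.springframework.security.core.userdetails.User
import org.springframework.security.core.userdetails.UserDetailsService
import org.springframework.security.provisioning.InMemoryUserDetailsManager
import org.springframework.stereotype.Service
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

@SpringBootApplication
class ChallengeApplication

fun main(args: Array<String>) {
    runApplication<ChallengeApplication>(*args)
}

@Service
class ArticleService {
    // QArticle is the Ebean query bean generated from the Article entity.
    fun findAll(): List<Article> = QArticle().findList()
}

@RestController
@RequestMapping("/article")
class ArticleController(private val articleService: ArticleService) {
    @GetMapping
    fun all(): List<Article> = articleService.findAll()   // serialized to JSON by Spring
}

@Configuration
class SecurityConfig : WebSecurityConfigurerAdapter() {
    override fun configure(http: HttpSecurity) {
        // Every request must be authenticated, using HTTP basic auth.
        http.authorizeRequests().anyRequest().authenticated()
            .and().httpBasic()
    }

    @Bean
    fun users(): UserDetailsService = InMemoryUserDetailsManager(
        User.withUsername("demo").password("{noop}demo").roles("USER").build()
    )
}
```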
Then it goes into the service, which does a lot of really fancy business logic in the form of calling the repository and returning the result. And then we see Qwerubin's in action. So the subproject I mentioned from eBeam, we essentially say QRTiql, find list. So this returns an empty or filled list and QRTiql is generated by kept or by the enhancement program in this case IntelliJ and returns all the articles as a list. So this is more or less the whole magic to get this work. And on top of this, we need it, as I mentioned before, I wanted to have a basic authentication. We need the configuration. So this is something where you have to know Spring, it's a bit clunky, but if you have only one configuration, it's fine. If you have multiple ones, you have to know or you might have to tweak it. You have to be careful on the order these get loaded and all these nice things. So here it's really simple to say essentially just we say on HTTP security, we want to have all requests should be authenticated. We use HTTP basic authentication is what I wanted to say. And we have something here, which is also you have to know it right. So we need a bean which is then used automatically by Spring, which implements the user details service interface. And this provides us with a user details manager. And yes, this is the mouthful, but essentially this here says, okay, we have these two users and this user then has a role. And we use the memory variant of the user details manager. This can be this can get much more complex, but this is more or less all eight. And with this here, we say everything we have in the controllers, but whatever we define in the controllers will be suspect to the HTTP basic authentication bit. So going back to the next contender, it's Ktor sorry for rushing through, but I have to keep an eye on the time. So with Ktor, which is really interesting because I couldn't find any comments, at least not at this stage. One of the very, very big pros is it is pure Kotlin, which just feels and it just feels right. So if you use Kotlin, Ktor is definitely nice. Definitely worth have a look into it. It has a really good documentation. So JetBrains and all the other additional contributors, they did a really good job. It's easy to pick up. They have lots and lots of different examples in there for all the combinations you can think of. It's really nice. This also shows how active the community is. Check out the Kotlin Slack Kotlin links like in this case as well. You find lots of help on there as well as well as on Stack Overflow. And it's backed by JetBrains. So JetBrains has a vital interest in keeping Ktor alive to position Kotlin on the backend. And you can feel that Ktor gets a lot of love there from the Kotlin folks at JetBrains. Before we get to micro now, let's have a look at where's my mouse? Here's my mouse at the code. So let's get this close here and have a quick look into Ktor. It will look very familiar. So essentially we are used coin in here because Ktor doesn't come with its own IOC container. So in the coin space we have to, or we can define modules which then in turn just define what is it what we inject and we say, okay, we want to have single items of these types and the implementations are this one. We could essentially do more fancy things in here, simple numbers, but it's all I needed in here. And then we go in here. So the actual application module we extend here or we implement here is the whole setup for the Ktor implementation I did. We do install coins. 
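Assembled, the Ktor application module being described here (installing the Koin dependency injection, content negotiation, basic authentication and the routing) looks roughly like this. It is a hedged sketch against the Ktor 1.x APIs current at the time of the talk (Ktor 2.x renamed these packages); the Koin wiring is replaced by direct instantiation to keep it short, and the credentials are made up. The walkthrough of each piece continues below.

```kotlin
import io.ktor.application.Application
import io.ktor.application.call
import io.ktor.application.install
import io.ktor.auth.Authentication
import io.ktor.auth.UserIdPrincipal
import io.ktor.auth.authenticate
import io.ktor.auth.basic
import io.ktor.features.ContentNegotiation
import io.ktor.jackson.jackson
import io.ktor.response.respond
import io.ktor.routing.get
import io.ktor.routing.routing
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

class ArticleService {
    fun findAll(): List<Article> = QArticle().findList()   // Ebean query bean, as before
}

fun Application.module() {
    val articleService = ArticleService()                  // the talk injects this via Koin

    install(ContentNegotiation) { jackson() }              // JSON responses

    install(Authentication) {
        basic("basicAuth") {
            realm = "articles"
            validate { credentials ->
                // Hard-coded check for the demo; delegate to a real store in practice.
                if (credentials.name == "demo" && credentials.password == "demo")
                    UserIdPrincipal(credentials.name) else null
            }
        }
    }

    routing {
        authenticate("basicAuth") {
            get("/article") { call.respond(articleService.findAll()) }
        }
    }
}

fun main() {
    embeddedServer(Netty, port = 8080) { module() }.start(wait = true)
}
```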
So Ktor comes with a module on an application extension for a supporting coin directly. So I didn't have to do or we don't have to do any initialization bits. We want to have content negotiation. So we want to be able to say what goes out of our controllers. This will be always JSON this case, but it's nice to have in here and it's easy to configure. You can see here it's rather explicit and then the next bit is already the authentication part. And authentication, especially basic authentication is rather easy. So we have it here. So we say we have authentication installed here and we say we have a basic authentication, give it a name so that we can have multiple authentication configurations. And then in the basic part we can, we have to implement the validate, which essentially just can check the credentials either hard or against hard strings like here, or we can delegate it against databases. And then we get into the implementation. So we need the article service injected here. So this is coin network and we have routing. So this is a pattern we will see with the other frameworks as well. So we define routes. And I think with spring we can already do this or we might be able to do this soon with spring through or one of the upcoming spring and we see bits, but don't point me down on this one. And we say, okay, we want to have, so this part of the routing is needs authentication. We map the authentication against the authentication we did up here. And we say, okay, we have one get endpoint, which is slash article slash and then we want to respond with the result of whatever the service gives back. So called respond is already K to R. And then the rest is something which will look familiar because it's doing the same thing. So we have article service here. And this called the repository or gets the repository injected and then calls a repository to get everything. And we use Q article find list again, right? Cool. So then we get to micronaut. Micronaut looks very similar to spring. So it comes with with lots and lots of additional, additional projects like spring. So for example, they have micronaut data, which is well inspired by spring data or real rails. And they come with their own IOC container and all these bits and box. But the big thing which distinguishes micronaut from spring is they use the as well the kept capability. So they do annotation preprocessing during build time. So we get all the things spring tries to figure out during runtime during compile time and then have a faster runtime. And this also results in the fast startup. So definitely faster than spring. And it's backed by its own company object computing. I think they do consulting as well. The only con I could find is it brings some Java baggage. So again, we have the to deal with not quite so straightforward nonability. And one or two other things we have to be careful with looking into the implementation and keeping an eye on the time. We should go into too much detail, but essentially micronaut then looks very similar. So I tried to stick to the same thing here. So we this is the startup. So we it's it's build a pattern they use. We can configure most of the things here. We need to to to get this thing rolling. We say the controller is in we use what's called packages. We say look into these controller. So we have the controller here and this looks already very familiar. 
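That "very familiar" Micronaut controller looks roughly like this. It is a hedged sketch assuming Micronaut 2.x-era packages and micronaut-security for the basic-auth part; the custom AuthenticationProvider that checks the demo credentials is only referenced in a comment, since its API has shifted between versions.

```kotlin
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import io.micronaut.runtime.Micronaut
import io.micronaut.security.annotation.Secured
import io.micronaut.security.rules.SecurityRule
import javax.inject.Singleton

@Singleton   // Micronaut's rough equivalent of Spring's @Component/@Service
class ArticleService {
    fun findAll(): List<Article> = QArticle().findList()   // Ebean query bean, as before
}

@Controller("/article")
@Secured(SecurityRule.IS_AUTHENTICATED)   // enforced by a custom basic-auth AuthenticationProvider
class ArticleController(private val articleService: ArticleService) {
    @Get
    fun all(): List<Article> = articleService.findAll()
}

fun main(args: Array<String>) {
    Micronaut.build(*args).start()
}
```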
So with the micronaut comes with its own set of annotations, which are then interpreted by the kept plugin and we have the one thing I had to look at is how to do the authentication, which wasn't that straightforward. But I found a basic example and adopted it. So the example they deliver with or might not delivers in code is a really simple authentication basic authentication provider, which checks that the username equals the password. And I did use this one to create this authentication provider, which then checks for our authentication details we saw before. The rest is the same except that we don't use coin, but the the icon to our IOC container from micronaut. And we see here again at singleton is comparable to at compartment at on the spring side. Going further, so Yobi is my my secret favorite. I came across Yobi thanks to a good colleague of mine. And it flashed me how fast it really starts up. And it's really, really, really fast. And I should stop using really, but it is really fast. And it comes with a minimal setup. So it's it's it's really just bare minimum we need to get something running and working. And it comes with a very Kotlin eskity as a that's really it's it's rather nice to use. And they have a mindset of using thin extensions extension layer. So we we delegate the work we want to do outside of simple HTTP request mappings and implementing features to whatever we whatever frameworks we want to use. So for example, the the container I use is coin, they don't have a dedicated extension for this but it's easy to set up. But they have a thin layer fordues, for example, and they do have a pack for J P a C for J extension, which then is used for the authentication bit and pack for J is a huge well, Java based but it's a huge authentication framework. The cons the documentation is very spotty. So if you if you look for for things and you always have to be careful if it is for version one dot xx or two dot xx. And this makes it a bit hard at the beginning. It seems to have a small team. It's not that big, but I hope they grow. I think they also start to create a company around your B and with two dot xx they throw away loads of integrations they had for one xx because they changed the way the extension works. And this is again coming back to the spotty documentation of the easiest bit plus it you'll be now says it's a Java and Kotlin back in framework or micro framework, but it still has loads and lots of Java baggage because yes, it is written in Java, but I spotted a couple of of Kotlin implementation bits there already. So what did I find? I hope you don't mind that I'm not going into the code, but I have to keep this short. So I found that obviously, well, if you look at the initial jar size and start up that obviously what was what we essentially could expect that spring has the biggest jar size. So it's it comes with lots and lots of dependencies. So they take their their toll on this one with 27 megabytes is not the biggest one. But keep in mind that this example is really simple and really small. It also has a startup time somewhere between 1.5 and three seconds. Yes, it is fast. And this these these figures were taken on my MacBook Pro. I think it's a 2018 i9 6 core 32 gig. So not not the smallest one. So it is yes, it is is okay is fast by it. There are faster options there. And the initial heaps of usage. So I used your kit to to look into, okay, I'll start what does the usage look like. It is the biggest wave spring. It's 200 megabytes. 
And then you see the other figures here. The clear winner on jar size and startup time is definitely Jooby. For Jooby I had to implement some small helpers, which already slowed it down, just to get the timing out; it's in the sub-second range, it's really fast. And it has the smallest jar size, because Jooby doesn't come with loads and loads of dependencies. I was a bit surprised about the heap usage, but we will see that it goes down a bit later on. And here we go with the memory usage once I did a request; a request here returns roughly 300 or 350 results. I took a couple of RSS feeds I collected from netzpolitik.org and from Spiegel and just put some details from them into a database, and I think I will release this code as well later on anyway. So going on here: the winner, if you look at the heap usage, is essentially Ktor, which cleans up the space rather nicely. And also when I used Gatling to do the same request, ramping up to a user count of 100, it's still doing well on the heap usage, so it doesn't have lots of memory lying around. I was a bit surprised about the Jooby figure here, but beside this it's still in the middle of the field, and Spring is in the middle of the field as well, just to keep an eye on it. Right. I also ran the load test I mentioned earlier, and if we look at the figures in the screenshots of the console load tests, run from my local machine via a DSL connection against the implementation running in a Docker container on a Linode instance, we can see that Spring is essentially returning, let me get the numbers straight, 25 OK, and 957 requests fail, which is interesting. The same test run from Linode, so essentially a beefier connection where we are in the same data center, shows that it definitely takes over 1200 milliseconds to complete, which is rather slow; but I didn't optimize this, as I said, it's a blunt implementation. So we see that it's not that reliable in this case without us doing a lot of work. And I should change the slides here as well. Here we go, this is the Ktor implementation. This already looks a bit better, slightly: we do get 93 OK back, with 97 percent failed, and from Linode to Linode it's, quote unquote, only 77 percent failing. For Micronaut it's improving compared to Ktor already, and especially compared to Spring: we get 197 OK back from local to the Linode instance, and from Linode to Linode, and I'm not getting sponsored by them, by the way, sorry, it's 267 OK, which brings us to a 77 percent failure rate. And with Jooby, surprise, this was definitely a surprise: the failure rate from Linode to Linode goes down to 38 percent, so we get more successful responses back than errors. Again, this is not optimized whatsoever; it's just running in a Docker container, getting hit after a few warm-up requests. So what does this mean in the end? It means that Spring is not necessarily as bad as many people want to see it, and it might be just fine. But if you start a new project, I would definitely recommend looking into Ktor or Jooby: Ktor because, well, Kotlin rules, and because there's a lot of work going on, especially on the performance optimization side; and Jooby because it's really a nice one, I would like to see it grow, and, as you can see, it does well on the reliability side.
Yeah. So with spring, it gives us a good feature resource balance. And Kato solid, as I just mentioned, it's, it's definitely it's far away from things you might find with not mature and not production ready. Yes, it is production ready. It's continuously getting better. There's a huge community behind it and it growth grows and grows. And definitely have a look into this. And it's, it's fun to use micron out. It's if you need something more complete. So like, like the, the spring approach, but a bit faster, have a look at my criminal. It feels okay. Documentation looks a bit ugly, to be honest, but it's, it's designed, but the content looks okay and helped me out. And the annotation processing helps a lot. And lastly, Yobi's definitely promising. So as you saw, it's, it's, it's fast. It's a result. It gives us a lot of, of, what's the word? Flexibility. This brings us to the end and to the questions. So thanks for bearing with me. Thanks for going with me through this and shoot away. Yeah, I think we okay. So hi everyone. Thank you very much for being as live. I want to say thank you to Olga for the amazing talk. Also, I catch the opportunity to thank you for doing a lot of Q&A's with other speakers today. And but now it's time for questions for your talk. And we have quite a lot. So one came from the last minute and it's from Nicholas. Yes. And he asks, why didn't you evaluate Quarkus? Really, just, I had to limit myself. And I didn't manage to squeeze Quarkus in. I have Quarkus on the radar as well. And I will take this opportunity to, to improve the talk on this one as well, or have a single talk on this. Yeah, this I can and say about, I can say about it. I haven't used it yet. I know it's around. And I know I've got some traction. So yeah, there was nothing, nothing, the theories behind it. Okay, so the next question, that's from me. I have to admit that I played a little bit with Spring and so on. But I saw that there is this tool on on Start Spring I O that allows you to sort of like bootstrap a project. And there is the opportunity to be coupling. Have you ever tried it? What was your experience? Do you think it's valuable for people that want to onboard? Yes. So essentially, you have three easy ways to start. It's either start Spring I O, or it's the, the, it's in the Spring CLI you can install. It's essentially a command line interface to the same things. Start Spring I O gives you and you can use IntelliJ, the, the Spring start. I'm not sure about the clips. I'm pretty sure they use a similar thing. And it all gives you the same essentially. So it's, it's, it's growing. If I compare it to early times with Spring, and you can now configure nearly everything. And you can decide if you want to have a, as a zip file, or if you just want to have a, as a plane directory when you use the CLI. And you can choose between Gradle or Maven. You can then if you use Gradle, you can choose between Gradle, between KTS and, and, and Ruby is also normal Gradle build files. It's, it's really easy to, to kick off there. And from, from my experience, just, just as a, how I approach it, I usually go with the minimum setup, which is either a web project or a REST project, or a REST interface project. And then just add the things I need afterwards. So step by step, because you get quite a lot of dependencies in, which is not bad, but you don't always need them. So, but yeah, it's, it's a pick and choose. It's definitely worth using. And it's, it's making things much easier. Awesome. 
So we'll give it a try. Yeah, then there has been a lot of interesting discussion going on, on, on the channel actually invite people to join the talk, private talk in some minutes, if you want to even chat directly with Olga. I should have, I should have given, I should have given a trigger warning to Nicholas. I forgot about this. Sorry, Nicholas. We picked some, some messages that got upvoted. And one is about pros and cons to what you talked about, if you're using GraalVM. Can you, can you talk a little bit in depth about that? I can, I can tell you, I didn't use it yet. I, I saw a lot of also things around GraalVM. I never used it so far. And I might be wrong on this one, as I said, I haven't used it yet. But last time I looked a bit into how it was like, Spring isn't completely capable of running in GraalVM. It might be wrong by now or there might be a cookbook, cookbook somewhere describing what you have to do to use it. But I think especially the whole reflection part, Spring makes, what makes Spring so magical and easy to use was not quite comfortable with Graal. And I'm happy to take the opposite things. And with some things, I would be highly interested in this, but I haven't, yeah, this all I can say to Graal. Okay. So there was also a bit of discussion around the Springfoo. And it like, again, what do you, what can you say about it? So Springfoo, thanks a lot for, for bringing this up. Yes, it is, it is more a incubator or sandbox environment. And then so new Spring things grow in there. And there is quite a lot of work going on. I remember a talk in late 19, I think I've seen where they already worked on a more diesel-like approach for the, for the whole web routing. I know with one of the newer features, newer versions of Spring Security, we get a more Kotlin-like diesel for Spring Security, finally, which will make, well, formatting the security configuration much easier. So it's actually, we can actually read it. So yes, it is definitely worth looking into this one. So thanks for, for clarification on this one, Nicolas. Yeah. Yeah. And then the last point was about the actuator. So, yeah, added to the list of pros in Spring Boot. Would you add it to the list of pros? Yes, no, why? Yeah, it's a definite yes, no. So yes, it is definitely, it's one of my favorites as well. Just to give a quick intro, because we had this discussion shortly before this year. So what actuator is, it gives you a really nice, easy way of plugging in a module which enables a lot of monitoring and metrics in your code. So you get a, things like a health interface, you get a ping interface, you get some, some, some basic metrics around how many requests come in, and it gives you a framework on what, how to easy add your own metrics, which we also use in a couple of projects before, or sorry, in a couple of projects before I worked with, where we put in some, some business metrics as well to just have them handy and have them, then request by other things. And yes, it's really easy to plug in with Spring Boot 2, the actuator project, switch to micrometer, which is now more or less the, the implementation for doing metrics. And with this, it's usually easy to use, to have the same thing in all the other frameworks. So there is for K2O4, I think Yobi even also have a ready to use layer for it. MicroNaut definitely relies on it as well. So it's everywhere, it's micrometer. So yes, it is good. And yes, it should be a pro, or could be a pro on the list. 
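For reference, here is a hedged sketch of what the Actuator and Micrometer combination discussed above amounts to in a Spring Boot project: the starter dependency plus a custom business metric registered through the injected MeterRegistry. The metric name is made up.

```kotlin
// build.gradle.kts (shown as a comment to keep this a single block):
//
//   dependencies {
//       implementation("org.springframework.boot:spring-boot-starter-actuator")
//   }
//
// This exposes endpoints such as /actuator/health out of the box.

import io.micrometer.core.instrument.MeterRegistry
import org.springframework.stereotype.Service

@Service
class ArticleMetrics(private val registry: MeterRegistry) {
    private val served = registry.counter("articles.served")  // custom business metric

    fun recordServed(count: Int) = served.increment(count.toDouble())
}
```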
On the other hand, you can have the same thing in any other framework with maybe a little bit more implementation overhead, so it has a really, really small impact if you use any other project.
Spring Framework helped us through dark times and is still a very active and helpful project. But is it the only option for doing any kind of web projects? There are many new frameworks around and they have interesting approaches. Especially when we use Kotlin, we might get better Developer Experiences and much better performance results. This talk is about showing you some interesting alternatives and hopefully helps you with finding a good fit for your next project. Let's have a look at a real-life-borrowed REST endpoint, written in Kotlin (dahhh) using Spring, Ktor, Micronaut and Jooby. We'll compare the efforts, timings and developer experience and collect pros and cons for that next project planning.
10.5446/53664 (DOI)
Hello everyone. First of all, thanks to the organizers of this dev room for accepting this talk on the European Environment for Scientific Software Installations, or EESSI for short, pronounced "easy". My name is Bob Dröge. I work in the high performance computing team at the Center for Information Technology, which is the central IT institute of the University of Groningen in the Netherlands. I'm mostly doing HPC user support and training and a little bit of HPC system administration. And software installation is a large component of this work, which takes up a lot of time. I'm also currently involved in two larger projects: the Euclid space mission of the European Space Agency, where I'm doing some HPC infrastructure related work, and the other one is EESSI, which I will talk about today. So I'm going to tell everything about the EESSI project, which is a free and open source software project, a little bit about the current status, and do a live demo of what we currently have, and finally conclude with some future work that we're working on. But first, in a nutshell, the European Environment for Scientific Software Installations, or again EESSI for short. It's a collaborative project between many different HPC partners, both from academia and from industry. And we have a common goal to build a large stack of scientific software that performs well on different kinds of systems. So most of us are from HPC, but we also want to make this work on, for instance, cloud instances and workstations. Initially the project was started by a bunch of Dutch universities and by Dell Technologies and the University of Cambridge, after a meeting that was organized by Dell to exchange knowledge and experiences. And since then, lots of other new people and partners have joined the project. You see lots of them on this slide, especially the most active partners in the project at the moment. But we have new people joining almost daily now. The main motivation for the project was that most of the HPC centers of these Dutch universities who started the project were struggling with the same kind of issue, which is that installing software takes up an increasing amount of time, mainly because of increases in different aspects. So we have more users from more different backgrounds nowadays. There's lots more software available, especially in fields like bioinformatics. There's lots of new hardware, new infrastructures like cloud, and also new accelerators, for instance, and types of CPUs. But on the other hand, the available manpower often doesn't really keep up with all these increases, and it's becoming a challenge to run and install all the scientific software on all these different systems and different kinds of hardware. There are two important remarks here about installing scientific software. First, it's often not that trivial to do, which is more or less literally illustrated by all these comics about the topic and also by the quite hilarious talk that Kenneth Hoste gave a few years ago at FOSDEM. He's actually involved in the project as well and is the lead developer of a tool, EasyBuild, which is a software installation framework that partly solves one of the issues on the previous slide, just like another installation framework, Spack. They both can install lots of scientific software for you.
So it does partly solve the issue, but there are still lots of other issues mentioned on the previous slide that they don't solve, especially in terms of supporting all this different hardware in a more automated way. The other remark is that it's important to optimize your software for the system you're going to run it on. The plot on this slide shows what happens if you take GROMACS, which is a popular molecular dynamics tool, and optimize it properly for the system you're running on and compare it to a non-optimized version. So you can get performance differences up to 70%, which means that you can save lots of expensive resources if you properly optimize your software. But since lots of HPC clusters nowadays have a large mix of different architectures and accelerators, this gets even more complex. So we hope to solve lots of the issues that are mentioned with the EESSI project. So our goal is to build a large shared repository of scientific software installations and do that in a collaborative way, to prevent each of us from redoing the same work all over again. This also brings in some other advantages. For instance, the more sites that are going to use this repository, the easier it becomes for users to switch between different sites, since the software will then be organized and provided in the same way on all those different clusters. We also want to make it possible to use the software stack on different types of machines and operating systems. So not only on HPC clusters, but also on clouds and workstations, laptops, et cetera. Not only on one specific Linux distribution or maybe a few, but in principle we want to support any Linux distribution and also macOS and even Windows, by using the Windows Subsystem for Linux. In terms of hardware, we want to support different kinds of CPU architectures, of course special interconnects like InfiniBand, which you often have on HPC clusters, and obviously also GPUs, which are very popular nowadays. Finally, our main focus will be on performance, since it's important that you have a properly optimized version for each different type of hardware, but also on automation, so that we make it easy to add new software to the stack for all these different architectures. And finally on testing, to make sure that everything we do and install is done properly. So we have quite ambitious goals, but we are confident that we can achieve these. And as I will demonstrate later on, we already have a working pilot repository that can do lots of the things mentioned here at this point. The whole concept of our project is not completely new. Our project is largely inspired by what Compute Canada has already done. So they have already built a software stack for all their national clusters in Canada and for a bunch of smaller ones. And we're actually in close touch with the people who built that stack, and we often get lots of valuable input from them, so we can learn from their experiences. If you want to know more details about this project, you can look at the paper or the presentation linked here, which were given by Maxime and his colleagues from the Compute Canada team. But just like that project, our project is also based on three different layers. So assuming we're running on some kind of client machine, then the client here provides the host operating system, which can be Linux or, for instance, macOS. And basically the host only needs to provide the drivers and maybe tools like the resource manager, like Slurm.
But then we add three layers on top of that. The first one is the filesystem layer, which actually distributes our software stack to that particular client machine. On top of that, we add a compatibility layer, which basically provides an operating system layer that levels the ground between the different clients. So it doesn't really matter what kind of operating system you're running anymore, because we provide it ourselves. And then there's the software layer that contains the actual applications and some of their dependencies. For most of these layers, we make use of free and open source software projects. So in the same colors, you will see the components that we use for the layers that I listed on the previous slide. So EESSI itself is also free and open source software. And besides the projects for those different layers, we have some more software that we depend on, for instance for automation and testing and spinning up cloud instances and clusters. So I won't go into all the details for all these different projects, that would take too long. But I will go through the different layers and say some more about the details. First there's the filesystem layer, which is based on CernVM-FS, or CVMFS for short, which is a software distribution service. And there are quite a lot of technical details here, which I will not go into again. But in short, CVMFS consists of different layers with replicas and caches. And then clients, which can be a personal workstation or HPC cluster or cloud, can either directly connect to one of our servers over here, or they can add their own cache layers in between to get better performance. So the key message for CVMFS is that it provides a reliable and scalable software distribution service, scalable because you can easily add more caches if you want to. So if you add more machines, you can easily add more caches if necessary. It distributes the software via HTTP, which makes it firewall friendly. And, most importantly here, it allows us to make our software stack available on any client we want to. Then we have the compatibility layer. Here we use Gentoo Prefix to basically do a kind of Linux installation from source in a non-standard location, called the prefix. In our case, the prefix is a path in our CVMFS repository, as you can see here at the bottom. And we use it to install the quite low-level system tools and libraries like glibc. But we try to keep this layer very minimal, so only the things we really need for our software to be built. All the tools and libraries in this layer don't really need to be heavily optimized, it's not compute intensive. So we just have one prefix installation per family of processors. So basically one for x86_64, one for arm64, et cetera, as you can also see in the paths over here. Then there's the software layer, which actually provides the real applications, the scientific applications and libraries. And these are optimized towards specific microarchitectures. So for instance, we have a tree for Intel Haswell, one for AMD Rome, et cetera, so any microarchitecture that we want to support. All these installations may only depend on libraries or applications that are also in the software layer, or, if it's for instance an operating system library, it should depend on the one from the compatibility layer, to ensure that the software doesn't depend on any kind of host library, since we don't know what kind of operating systems they will run on. We use three main tools in this layer, all free and open source: EasyBuild, Lmod, and archspec. So first we use EasyBuild to build all the different scientific applications for the different microarchitectures. EasyBuild also makes sure that it generates the module file that we need for Lmod. And Lmod, that's the environment modules tool that we use, which makes it easily possible to offer multiple versions of the same applications to users, and provides them as a kind of plug-and-play modules that users can then load and start using. And finally archspec: so we install the software into different subtrees, one for each microarchitecture, and when someone wants to use the repository, we use archspec to detect what kind of microarchitecture their host machine uses. And based on that, it will automatically pick the right subtree with the scientific applications for that particular machine. So we use archspec basically to tell Lmod where to find the right software installations. It may sound like a lot of magic, but we actually have a working pilot repository already that can be used by anyone. So we have lots of Ansible playbooks to, for instance, deploy all the different layers and all the different infrastructure for our pilot setup, and also scripts and documentation, which you can all find on our GitHub page. We also have an initial setup for CVMFS with servers in Groningen and Oslo at the moment. Not very large yet, but it allows us to do all the tests that we want to do. We currently support x86_64 and arm64 in our compatibility layer and only Linux clients, but that means in principle also Windows if you use the Windows Subsystem for Linux. And in terms of software, we just have a few applications in our pilot repository at the moment, mainly because we're focusing on testing and setting up the overall structure and solving issues and automating tasks and things like that. But when everything is working properly, we can easily add more scientific software later on. It should be very straightforward. And for the same reason, we also have just a limited number of microarchitectures at the moment, but we can also extend that later on, at least for the non-exotic microarchitectures.
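To make the microarchitecture detection step described above a bit more tangible, here is a minimal Python sketch that uses the archspec package directly, in the same spirit as the init script; the list of available subtrees is an invented placeholder, not the real EESSI layout.

    # Hedged sketch: detect the host CPU microarchitecture with archspec
    # (pip install archspec) and fall back to a more generic target if needed.
    import archspec.cpu

    host = archspec.cpu.host()
    print("detected microarchitecture:", host.name)

    # Subtrees we pretend to provide, from most to least specific (placeholder list).
    available = ["graviton2", "haswell", "aarch64", "x86_64"]

    # host.ancestors lists more generic microarchitectures the host is compatible
    # with, which allows falling back when there is no exact match.
    compatible = [host.name] + [a.name for a in host.ancestors]
    choice = next((t for t in compatible if t in available), None)
    print("using software subtree:", choice)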
We use three main tools in this layer, all free and open source, easy build, L-MOD, and ArchSpec. So first we use each build to build all the different scientific applications for the different micro-architectures. And then easy build also make sure that it generates the module file that we need for L-MOD. And L-MOD, that's the environment modules tool that we use, which makes it easily possible to offer multiple versions of the same applications to users. And provides them as some kind of plug-a-mop modules that users can then load and start using. And finally ArchSpec, so we install the software to different sub-trees, one for each micro-architecture. And when someone wants to use a repository, then we use ArchSpec to detect what kind of micro-architecture their host machine uses. And based on that, it will automatically pick the right sub-tree with the scientific applications for that particular machine. So we use ArchSpec basically to tell L-MOD where to find the right software installations. But it sounds like a lot of magic, but we actually have a working pilot repository already that can be used by anyone. So we have lots of Ansible playbooks to, for instance, deploy all the different layers and all the different infrastructure for our pilot setup. And also scripts and documentations, which you can all find on our GitHub page. We also have an initial setup for CVMFS with servers in Groningen and Oslo at the moment. Not very large yet, but it allows us to do all the tests that we want to do. We currently support in our compatibility layer X8664 and ARM64 and only Linux clients, but that means in principle also Windows if you use the Windows subsystem for Linux. And in terms of software, we just have a few applications in our pilot repository at the moment, mainly because we're focusing on testing and setting up the overall structure and solving issues and automating tasks and things like that. But when everything is working properly, then we can easily add more scientific software later on. It should be very straightforward. And for the same reason, we also have just a limited number of microarchitectures at the moment, but we can also extend that later on, at least for the non-exotic microarchitectures. So this page that you will see on the slide over here describes how to use our pilot repository, but that's basically a three-step process. First you have to access the repository itself, and for that you need to use the CVMFS client, which you can either install natively, but then you need root privileges, or you can use Singularity if that's available on your system. They don't need root privileges. So once you've accessed to the repository, you can source an init script that we provide that will completely set up your environment, detect what kind of microarchitecture you have, and it will find the right software tree for your machine, and then you can start loading the modules that you want to use and start computing. So a little bit more details about that first step. So if you want to use the native installation, this shows an example how you can do that. It's quite simple. You basically just have to install a few packages, the CVMFS client itself, configuration package that we provide. You have to make a local machine-specific configuration file, then run the setup command, which will prepare the mount, and you can access the repository. If you want to do it with Singularity, this is basically how it will look like. 
So it will take a container from Docker Hub, which we provide for you. You can just take it with Singularity. You have to set up some special options with the --fuse-mount flag that will actually mount the repository for you. Basically all you have to provide is some directory on the host which will be used to store the cache of CVMFS. By default we use /tmp, but you could change that. But in principle, you can just run this, and then it will launch the container where you have access to the repository. The next step is sourcing the init script, as I mentioned, which will print out some values about the machine you're using. So in this case, it says it's an Intel Haswell machine, so it will basically set up Lmod to use the software from the Intel Haswell tree that we provide. And finally, you can start using the software. So you can look for modules. You can load, in this case, GROMACS, and then you can start running it. So I can show you how that works in practice. I can do this on many different systems, and I will use some files from our demo repository for running. In this case, I will try to run GROMACS on some different systems. So I will switch to my terminal. I have three tabs open here. The first one is on AWS, an Amazon cloud VM. And as you can see, that's an arm64 processor. So it's a completely empty VM. The only thing I've installed is Git, and I did a git clone of our repository over here. And what I'm going to do is first install the native CVMFS client by using this script over here that I've already prepared, which will basically do the things from the slides that you've seen before. So I'm going to run this, which might take a little bit of time. In the meantime, I'll switch to the second system as well. This is actually on the HPC cluster of the University of Groningen. So this time, I'm running on an x86_64 machine, which is an Intel Broadwell architecture. And in this case, I'm going to use, from that same directory, a script that will launch a Singularity container. So it's a similar kind of thing that you saw on the slides. And if I run that, you might see some warnings and errors over here, which we can ignore for now. But this will give me access, as you can see, to our pilot repository, where we currently have two versions. Let's go back to this one. So now that it has installed CVMFS on the AWS instance, I can also check it here. And I also have access now to the same repository. So the next thing that I will have to do is source the init script. So I'm going to provide the path to the init script, which is in the init directory, and the script is called bash. So it will detect the EESSI repository. It will say, this is an arm64 Graviton2 processor, so I'm going to use the software from the arm64 Graviton2 software tree. And now it's all set up. So I can do module avail to show which modules are available, and I can just browse through them. So I will run a demo now with GROMACS, which is a molecular dynamics tool, for which we also provide a script over here. So I can show you what it does. It will load the module, it will download some input file for the benchmark that we're going to use, which is a PRACE benchmark that you can find on the PRACE website, and then just run the benchmark. So it will start downloading, and then it will start GROMACS. This is a machine with two cores, so we will see that it's using two threads. And that will take a bit of time.
So in the meantime, I can do the same thing on our HPC cluster inside the Singularity container. I have to source the init script, and then I will have to go to the same directory. And I can run GROMACS from here as well. So this will run with 28 threads because it's a 28-core machine. So that will go a bit faster than the other one, which is still running. And in the meantime, I will open my third tab. This is actually the WSL layer on Windows, which means that I have an Ubuntu installation here in my Windows 10 installation. So this is actually my laptop now, running with this Intel processor. So we'll do the same thing. I'm actually running in an Ubuntu shell right now, where I have already installed the CVMFS client. You can just do it in a similar way as on a real Ubuntu system. So right now I'm going to just source the init script here as well. And as you can see, it will detect this to be an Intel Haswell. I have a slightly newer architecture than that, but because we don't have an exact match for this microarchitecture, it will fall back to the best one that we have in the repository, which is the Intel Haswell one. And now I can run the same benchmark over here. I also have a checkout of the same repository over here with the same test scripts. So I'm going to run the same benchmark also on my laptop. So now I'm running basically Linux software on my Windows machine using the Windows Subsystem for Linux, with the CVMFS mount of our repository. As you can see, the one on Amazon is still running because it only has two cores. The one on the HPC cluster is already done. But in the meantime, I will just switch back to the slides. Well, it's still running, so I can check back later. But the nice thing that it shows is that I can easily run the same piece of software on different kinds of hardware and different kinds of operating systems, and it will all be handled automatically. It will detect which machine this is, what kind of OS, and what the best match is. So we've already made quite a lot of progress over the last couple of months and we now have this working pilot version that I just showed you. But there are still a lot of things that we want and need to work on, which we are doing by making monthly revisions of our pilot. That allows us to easily test things and identify issues, and then we solve those in later revisions of our pilot. For now and for the short term, our main focus is on adding more automation to the repository with tools like Ansible and Terraform and Cluster-in-the-Cloud. And ideally, we want to make it possible that, for instance, if someone wants to add software to the repository, he or she can open a pull request on GitHub with a request for a particular piece of software to be added; we review it, and when we approve it, it will automatically start building that software for the different architectures, for instance by using cloud instances. And when that succeeds and passes some tests, it will automatically be added to the repository. We also want to add a lot more testing to the repository, for instance by leveraging tools like ReFrame and GitHub Actions. With those we want to verify that things are okay with the entire installation of the repository, but also do performance tests of the software that we've installed, and scalability tests. We're also currently working on support for macOS clients, for POWER-based systems with POWER8 and POWER9 CPUs, and obviously for GPUs, which we want to do soon.
And of course, we want to add a lot more software at some point. We also hope that scientific software developers will see the value of this project and will be willing to help us out with validating the installations of their software in our repository, so that both they and we can be sure that the users of that software are using a correctly installed version. And we want to find some more dedicated manpower and funding to make the project more sustainable, since until now most of us are doing this more or less on the side. We're also setting up a consortium at the moment. And at the same time, we want to change our name, the European in our name; we often get questions about that, and we don't want to limit it to Europe only, so we want to change that E to something else. And finally, most importantly, we of course want to work towards a production setup so that we can all start using this on our systems. So if you want to find out more, you can check out our website, documentation, GitHub pages and Twitter. On our website, you can also find a form that you can fill in to request an invite to our Slack channel and our mailing list. And then you can also join our monthly meetings, which are every first Thursday of the month. So with that, I'd like to thank you very much for watching this talk. And if you have any questions, I'll be around now to answer them. And of course, you can otherwise contact us on Slack. Thank you.
The European Environment for Scientific Software Installations (EESSI, pronounced as “easy”) is a collaboration between different HPC sites and industry partners, with the common goal to set up a shared repository of scientific software installations that can be used on a variety of systems, regardless of which flavor/version of Linux distribution or processor architecture is used, or whether it is a full-size HPC cluster, a cloud environment or a personal workstation. The concept of the EESSI project was inspired by the Compute Canada software stack, and consists of three main layers: - a filesystem layer leveraging the established CernVM-FS technology, to globally distribute the EESSI software stack; - a compatibility layer using Gentoo Prefix, to ensure compatibility with different client operating systems (different Linux distributions, macOS, Windows Subsystem for Linux); - a software layer, hosting optimized installations of scientific software along with required dependencies, which were built for different processor architectures, and where archspec, EasyBuild and Lmod are leveraged. We use Ansible for automating the deployment of the EESSI software stack. Terraform is used for creating cloud instances which are used for development, building software, and testing. We also employ ReFrame for testing the different layers of the EESSI project, and the provided installations of scientific software applications. Finally, we use Singularity containers for having clean software build environments and for providing easy access to our software stack, for instance on machines without a native CernVM-FS client. In this talk, we will present how the EESSI project grew out of a need for more collaboration to tackle the challenges in the changing landscape of scientific software and HPC system architectures. The project structure will be explained in more detail, covering the motivation for the layered approach and the choice of tools, as well as the lessons learned from the work done by Compute Canada. The goals we have in mind and how we plan to achieve them going forward will be outlined. Finally, we will demonstrate the current pilot version of the project, and give you a feeling of the potential impact.
10.5446/53669 (DOI)
Hello everyone, my name is Abhinav Bhateli. I'm an assistant professor at the University of Maryland and I also lead the Parallel Software and Systems Group here. Today I'm going to talk about analyzing performance profiles using Hatchet. Hatchet is an open source collaborative project. They started at Lawrence Livermore when I was working there until 2019 and over time we have had contributions from many people, in particular students at UC Davis, Tennessee, and Arizona. So let's start with talking about why did we start working on Hatchet. Understanding performance bottlenecks is critical to optimizing software, both sequential and parallel. There are many profiling and tracing tools that can help you identify parts of the code that consume the most time. Now oftentimes you have functions that are called from many places in the code and you would like to understand when a function A is called from one place versus some other place, how much time do you spend in those functions. Now this requires a reasonable understanding of the code structure and it requires attributing performance or execution time to different calling contexts. Now fortunately we have many sophisticated profilers that can attribute time to calling contexts. So we can see where a function was called from and when a function was called from a specific context, how much time did we spend in it. Some examples of measurement tools that provide such information are HPC toolkit, Calipers, CoreP and other things that I've listed here. There are also similar tools for Python codes such as C profile and Pi instrument. So what information do we get when we use one of these tools to profile our programs? So one of the common representations of information that's collected from such profiling tools is called a calling context tree. In such trees you get information about what function calls which other functions. So for example, foo calls Waldo and then Waldo calls Garpley. For each of these nodes in this calling context tree, there might be contextual information that is recorded by the tool. So for example, the file in line number where the sample was generated, the function name, call path, load module and so on. If you were running a multi-process or a multi-threaded program, you might also have the process ID or the thread ID where the samples were generated. In addition, for each sample, you might have performance metrics that are associated with these nodes. So for example, you might have the time spent in the function, other hardware counters such as floating point operations, caching assist and so on. Now sometimes tools do not record the specific time spent in functions based on where they were called from. So you might have a scenario where I know the total time spent in Groud, but I cannot divide that by the time spent when it was called from Corgi versus Barr in the graph on the right. So in these cases, the calling context tree degenerates into a call graph, which is what's shown on the right, and some tools generate call graphs. Most of the time measurement tools also have analysis tools that come with them. The limitations of such analysis tools are that they typically only support their own unique formats. So you could not use one analysis tool to visualize data or analyze data from another tool. They also have limited support for saving or automating the analysis. In this, in the case, you want to do the same analysis again and again. Many tools only support viewing one dataset at a time. 
So if you wanted to compare two different executions or runs, in what is often called multi-run analysis, you would have to put two windows next to each other and then manually compare them. More importantly, they also lack capabilities to subselect and focus on specific parts. And overall, they do not enable programmatic analysis of the data. So if a user wanted to do scripting to analyze the data themselves, that's most of the time difficult in such tools. And that's where we started thinking of developing a new tool, which is called Hatchet. So Hatchet is a Python-based library, and it enables doing programmatic analysis on performance data. It creates an in-memory representation of the hierarchical graph or hierarchical performance data you have, and it leverages pandas, which is a Python library, for storing all of the contextual and numerical information. Pandas supports multi-dimensional tables or datasets, and we use our graph as a structured index to index pandas data frames. I'll talk about these details in a later slide. Once we have the data structures in place in memory, we have defined a set of operators in Hatchet that help us subselect or aggregate profile data, and a set of operators that help us compare multiple datasets for multi-run analysis. So before we talk about Hatchet in more detail, let's do a quick primer on pandas and data frames. Pandas is an open source Python library for data analysis. Pandas provides two data structures, a series and a data frame. We use data frames in Hatchet. A data frame is a two-dimensional tabular structure and supports many operations that are borrowed from SQL databases. So on the right, you see an example data frame. It has rows and columns. Each row has an index. This index is flexible, so you can define your own indices for your dataset. And pandas also supports a multi-index, which enables working with higher-dimensional data in a 2D data structure, and we'll see how we use that in Hatchet. So let's start with talking about the data structures in Hatchet. We have a canonical data model, which we call a structured index. It's basically an in-memory graph. Each node in the calling context tree is assigned a unique key here, which enables the nodes to be used as an index into the data frame. Each node has a frame that describes the code it represents. So for example, if you look at the figure on the right, the root node has the name foo and is of type function. foo then calls some other nodes; the right child is a node of type loop, and there's a file and line number associated with it. That node then calls some particular statement in the same file on a different line number. So frames don't have a rigid schema. The different nodes can have different types. They can be of type function or loop or statement or module and so on. The nodes in our graph or tree define the structure and connectivity for our data. We use the structured index to define the graph object and then we have a corresponding pandas data frame. The graph in this case stores the caller-callee relationships. So this is a sample graph showing that main calls physics and solvers, solvers then calls hypre and MPI, and so on. The data frame that corresponds to this graph stores all the numerical and categorical data. So for example, main has a row in this data frame and main is used as an index to that particular row. Similarly, psm2 is a different node and it points to a different row in the data frame.
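As a small, purely illustrative sketch of the node-indexed dataframe just described (the names and numbers are made up, and real Hatchet dataframes carry more columns), the pandas side looks roughly like this:

    # Sketch: a pandas dataframe whose index corresponds to graph nodes.
    import pandas as pd

    df = pd.DataFrame(
        {"time": [5.0, 60.0, 8.0, 20.0, 7.0, 12.0],
         "time (inc)": [112.0, 60.0, 47.0, 20.0, 19.0, 12.0]},
        index=pd.Index(["main", "physics", "solvers", "hypre", "mpi", "psm2"], name="node"),
    )

    print(df.loc["solvers"])                         # the numerical data attached to one node
    print(df.sort_values("time", ascending=False))   # hottest nodes first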
Now, oftentimes when people are doing parallel performance analysis, you would have metrics that are collected per MPI process or per thread. And in this case, we use the multi-index support in pandas to create a hierarchical index. So if you see the columns on the left that are node and rank: for each node, you have data per rank, and you can have times and other metrics for individual ranks for a particular node. Since we have direct references to graph nodes in the data frame, this can be risky, in particular when the graph nodes are shared by multiple graph frames. So if you're using the same graph to index into multiple data frames, you would want to make sure that unintended changes do not happen. So we ensure immutable semantics for graph nodes. What that means is that any operation that modifies the graph nodes in place creates a new graph frame and a new graph index. This is implemented using copy-on-write semantics. So this ensures that anytime you're making changes to the graph structure, we create a new graph frame with a new graph and a data frame. So let's start talking about how you use Hatchet. Hatchet is open source. It's available on GitHub at this URL, github.com. You can either install it from source or use pip, using pip install hatchet. We support many different data formats: HPCToolkit, Caliper, gprof, pyinstrument, and cProfile. We also provide a simple string literal format if you wanted to just play around with synthetic data and generate that yourself. We are in the process of adding new readers for Timemory, TAU, and Cube, and those should be merged into the Hatchet repository soon. I'll talk about how you start reading data into Hatchet, but if you have a data format for which we do not have a reader yet, we welcome contributions to Hatchet. So please talk to us and feel free to add a PR for a new reader. So what I'm going to do is, instead of just having slides and showing code, I'm going to use a Jupyter notebook to illustrate various features in Hatchet. Let's start with the most common operation that people use in Hatchet, which we call a filter operation. So I've created a simple synthetic graph here using the string literal format. We have a from_literal function in Hatchet; we use this to input this data and then create a graph frame object. Once you have a graph frame object, you can create a tree representation which can be printed on the terminal. And this looks like this. This is the graph we saw in the slides: main calls physics and solvers, and then they call some other functions. We can also display the corresponding data frame associated with this graph frame. And that looks like this. It has eight rows and each node has some metrics associated with it. So what does a filter operation look like? It's shown here in a single line. You provide a lambda function that defines the operation that is applied to the rows of the data frame. In this case, what we are saying is that we want to filter all the rows by looking at the time column of the data frame, which is this column, and only keep nodes that have a time greater than 10 units, 10 seconds, let's say. Once you do that and you want to display the filtered data frame, you will see that the nodes that had less than 10 seconds, so the MPI node and the solvers node, are now gone and you are left with five nodes. Now notice that I had squash equals false here. When you do that, we only filter the data frame and make no changes to the graph.
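The workflow shown in the notebook looks roughly like the sketch below; the literal input is a tiny made-up tree, and the exact keys of Hatchet's literal format can differ between versions, so treat this as an approximation of the API rather than a verbatim recipe.

    # Approximate sketch of the from_literal + filter demo (pip install hatchet).
    import hatchet as ht

    literal = [{
        "frame": {"name": "main"},
        "metrics": {"time": 5.0, "time (inc)": 100.0},
        "children": [
            {"frame": {"name": "physics"}, "metrics": {"time": 60.0, "time (inc)": 60.0}, "children": []},
            {"frame": {"name": "solvers"}, "metrics": {"time": 8.0, "time (inc)": 35.0}, "children": []},
        ],
    }]

    gf = ht.GraphFrame.from_literal(literal)
    print(gf.tree())          # terminal rendering of the calling context tree
    print(gf.dataframe)       # the underlying pandas dataframe

    # Keep only nodes spending more than 10 time units; the graph is squashed by
    # default, or pass squash=False to only filter the dataframe.
    filtered = gf.filter(lambda row: row["time"] > 10.0)
    print(filtered.tree())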
Squashing takes the graph associated with this data frame, removes the nodes that have been filtered out, and does a rewiring of the graph with the remaining nodes. Squash is on by default. So if you do not provide this argument and you do a squash operation, this is what the corresponding squashed graph looks like over here. And if you provide squash equals false, you can do this in two steps: first do a filter and then do a squash to get the same tree. Just to give another example, you could provide a different filter that says I want to look at all the nodes where the name matches main or psm2, and then when you print the squashed tree, you get something like this. The next operation we'll look at: in addition to the basic filtering capabilities we provide, we also provide a query language for doing filtering. This enables us to specify call path patterns in the calling context tree and use those to do the filtering. For example, you could say that I want to look at all the nodes where the name matches MPI, and I want to get all the child nodes that are underneath these matched nodes. We provide this query to the filter. It's going to look at all the nodes that are MPI, in this case this one and this one, and then also return the nodes that match the star here. So you get a tree that looks like this after the squashing. So those are two different ways of doing filtering. You can either use a basic filter by providing a lambda function, or you can use a call path query and specify a call path pattern to do more advanced filtering on the calling context tree. Now let's start looking at more advanced operations where, let's say, we have multiple datasets and you want to compare them. So here I've created two synthetic datasets and we're calling them graph frame one and graph frame two. And these are the two trees. The trees are exactly similar in structure; the only difference is the time spent in different functions. Now let's say I wanted to see, when I run these two executions, where I spend more or less time in different nodes. So I can do a subtract operation in this case, gf2 minus-equals gf1, and then I can print the resulting tree. What this shows now is, for all these nodes, the difference in time. So what we see is that in psm2, I spent 25 units here and 35 units here, and that turns out to be 10 units for psm2. In the other node, I spent 15 units here and 30 units here, and so I get 15 units here. This can help us identify, when we do different executions, where the increase or decrease in time comes from. And we'll see more of these in more advanced examples. Now, oftentimes people are familiar with gprof and they want to do a simple analysis to create a flat profile. That is very simple when you use Hatchet. So let's say you have loaded a dataset into Hatchet, and this was generated using HPCToolkit. You can then say that I want to do a group-by on the data frame by the name column. What that does is it looks at the name for all the nodes, and for all the nodes with the same name, the numerical data is aggregated by the function provided here, in this case sum. And then if you print the data frame after sorting on the time column, you can see which functions I spend most of my time in. So in this case, these are the top three functions where I spend most of the time in Kripke. You can also do a group-by on other categorical columns.
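Because the metrics live in an ordinary pandas dataframe, the gprof-style flat profile described above boils down to a group-by. A hedged sketch, assuming a graph frame gf has already been read in with one of the readers mentioned earlier and that its dataframe has name and time columns:

    # Sketch: flat profile from a Hatchet graph frame (gf created earlier,
    # e.g. with one of the GraphFrame.from_* readers).
    flat = (
        gf.dataframe
          .groupby("name")                      # merge call sites with the same name
          .sum(numeric_only=True)               # aggregate their metrics
          .sort_values("time", ascending=False)
    )
    print(flat.head(10))                        # top 10 functions by exclusive time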
So you can do a group-by on the load module, which will tell you which libraries I spend most of my time in. You can do a group-by on the file, and that will tell you which files in the code I spend most of my time in. The next analysis I will demonstrate is looking at load imbalance. So let's say you are running your program on many cores, in this case 512 cores, and you wanted to see at which nodes I see the maximum load imbalance in my code. So you want to see, at a call site level, where the load imbalance comes from. So again, we load the dataset into a graph frame. In this case, we make a copy of the graph frame. We then drop the per-MPI-process information by doing an aggregation. For the first graph frame, we get the average or mean time spent in each function, and for the second graph frame, we do the aggregation using the max function. So now graph frame gf1 represents the average time spent in each node and graph frame gf2 represents the maximum time spent in each node in the call graph. When I do a div operation, where I divide graph frame two's time column by graph frame one's time column, it does a division of the max time over the mean time, and that tells me what imbalance I see in the different nodes. I can then display the data frame sorted by the imbalance column, and that will tell me that these are the top functions where I see the largest imbalance, because it's showing me the maximum time over the mean time over all processes. So you can see that, using a few lines of code, I can use Hatchet to look at load imbalance in my code, and specifically at the level of each call site. So now let's again go back to analyzing multiple executions. Let's say I did a run of LULESH on a single node on one core versus 27 cores, and I wanted to see where I spend most of my time when I run on the larger number of cores. So I read the data into two different graph frames, gf1 and gf2. I can print those two trees and they look something like this. These are larger trees than the toy trees we have seen. I can then do a subtract operation, so gf3 equals gf2 minus gf1, and then print the resulting tree. What the resulting tree will tell me is, when I run on the larger number of cores, where do I spend more time? So even though in each individual tree I spend most of the time in the CalcHourglass routines, the increase in time is the largest in TimeIncrement and then in CalcQForElems. Of course, if your tree is large, you don't need to print the tree itself. You can sort the data frame by the time column and then look at the top few rows of the data frame, and you'll see the same result: that it's TimeIncrement and CalcQForElems where I see the largest differences. We can even do more advanced things with these subtract operations by combining them with filters. So for example, if I was running two instances of the LULESH code with MPI enabled on multiple nodes and I wanted to see, when I run on 27 cores versus 512 cores, which MPI routines show the largest increase in runtime: again, I load the datasets into two different graph frames, gf1 and gf2. I then squash the graph frames by doing a filtering operation and saying I only want to look at the functions where the name starts with MPI. I then do a subtract operation on the squashed graph frames, and then I print the data frame sorted by the time column.
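A hedged sketch of that last recipe (comparing the MPI time at two scales); the file names and the choice of the Caliper reader are placeholders, and the name-matching pattern is an assumption about how MPI routines appear in your particular profiles:

    # Sketch: where does MPI time grow when scaling from 27 to 512 cores?
    import numpy as np
    import hatchet as ht

    gf1 = ht.GraphFrame.from_caliper("lulesh-27cores.json")    # placeholder paths
    gf2 = ht.GraphFrame.from_caliper("lulesh-512cores.json")

    # Aggregate away the per-rank index so the two frames are comparable node by node.
    gf1.drop_index_levels(function=np.mean)
    gf2.drop_index_levels(function=np.mean)

    # Keep only the MPI routines, then subtract to see where time increased the most.
    gf1 = gf1.filter(lambda row: row["name"].startswith("MPI"))
    gf2 = gf2.filter(lambda row: row["name"].startswith("MPI"))
    diff = gf2 - gf1

    print(diff.dataframe.sort_values("time", ascending=False).head(10))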
So this tells me which MPI routines I spend most of my time in, or in fact, where we see the largest increase in time. So this can help us identify scaling problems and we can go and fix those functions where the largest increases are happening. You can do something similar with multiple datasets and not just two datasets. This example shows how we used Hatchet with a few lines of code to analyze eight different datasets. So we ran LULESH from one through 512 cores, and this part loads the datasets into memory and creates graph frames for each of those. We then pivoted the data to be arranged by the number of processes I'm running on, the different functions we find in each execution, and the times spent in these functions. I can then use matplotlib to create a plot that looks like this, where I can see, for each process count on the x-axis, how much time I spend in different functions. This can allow me to understand, as I run on more processes, which functions are scaling poorly and which functions I need to pay more attention to. Now notice that an analysis like this would have taken a significant amount of programming if you wanted to do it in older tools; with Hatchet, this kind of programmatic analysis takes very few lines of code. Going back to looking at other features in Hatchet: we support visualizing small graphs. This should ideally not be used for analyzing your full call graph if it's huge, but once you've done subselections, filtering and squashing to look at a smaller subgraph, you can use the visualization capabilities in Hatchet. We've already looked at the terminal visualizations that are written out to stdout. You can also output to a Graphviz format. We write out a DOT file that can then be displayed using Graphviz. The other format we support is the flame graph format. Again, you can write out a file that may be read in by the flame graph software to generate a visualization that looks like this. So far, the operations I've talked about are not the exhaustive list of API operations supported by Hatchet. I refer you to the Hatchet documentation on readthedocs.io for the full user docs. Finally, I'll talk about some performance improvements we've been making to the Hatchet operations themselves. We realized that when we were looking at very large datasets that had hundreds of thousands or hundreds of millions of rows in the data frame, we were running significantly slower. There were two operations we identified. One was the read operation in the HPCToolkit reader that was significantly slow, and the other was the unify operation in the graph frame that we were using to unify multiple graphs. So by doing significant optimizations, such as using Cython, we have been able to improve the reading time for HPCToolkit from six hours to two minutes for 100 million rows in the data frame. And similarly for the unify operation, our runtime has gone down from two hours to 1.5 minutes. So we expect that even for large datasets, Hatchet should perform very well. Finally, in conclusion, with Hatchet we are leveraging the power of pandas for performance analysis in HPC, for analyzing hierarchical performance data. This enables us to use graph nodes to index a data frame. And finally, Hatchet helps us do programmatic analysis of such hierarchical data for one or multiple executions.
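To round off that point about multi-execution analysis, here is a hedged sketch of the scaling study described earlier (several LULESH profiles pivoted by process count and plotted); the file naming scheme and the reader choice are assumptions, and only the pandas/matplotlib part is meant literally:

    # Sketch: how do the hottest functions scale with the number of processes?
    import pandas as pd
    import matplotlib.pyplot as plt
    import hatchet as ht

    frames = []
    for nprocs in [1, 8, 27, 64, 125, 216, 343, 512]:
        gf = ht.GraphFrame.from_caliper(f"lulesh-{nprocs}cores.json")  # placeholder
        df = gf.dataframe.groupby("name").sum(numeric_only=True)
        df["nprocs"] = nprocs
        frames.append(df.reset_index())

    pivot = pd.concat(frames).pivot_table(index="nprocs", columns="name", values="time")

    # Plot only the most expensive functions at the largest scale.
    top = pivot.iloc[-1].sort_values(ascending=False).head(5).index
    pivot[top].plot(logx=True, marker="o")
    plt.xlabel("number of processes")
    plt.ylabel("time (s)")
    plt.savefig("scaling.png")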
In the future, we want to support other file formats, and we encourage people to contribute readers to Hatchet. We are also planning to add an output format for Hatchet so you can save intermediate data as you're analyzing your input datasets. And then we are also planning to implement a higher-level API that can help us automate some of the performance analyses that we have seen Hatchet users do. So for example, you might automatically be able to say, tell me if there's load imbalance in my code, and Hatchet will have a function that does that for you. So here are some links. Hatchet is on GitHub and we have documentation on readthedocs.io. Again, Hatchet is a collaborative project with contributions from many people, and the list of contributors is available on GitHub. Thank you. Okay, so I assume that we will be in the livestream shortly. So I think we have four questions. I'll just work them from top to bottom. So the first question is: you have created a tool for the actual analysis of performance data that relies on being able to analyze a lot of the data from other tools; what changes or improvements would you recommend be made to these profilers, to make your analysis more accurate or detailed, or to allow a simpler or easier analysis? Yes, I think that, from the point of view of Hatchet itself, we do not require any changes to the measurement tools. Of course, the more detailed your tool is, or the more detailed the data that your tool collects, the better information you get when you're trying to analyze it. So for example, some of the tools like Caliper have an annotation-based measurement of the data, so you only get data at the level of regions that the user annotates in the code, whereas other tools like HPCToolkit or Cube might have a full calling context for all the functions. And HPCToolkit actually also gives you things such as the loop or the line number and so on. So the more accurate data you have, the more accurate your analysis is going to be. But from the point of view of Hatchet, we don't really require a certain level of granularity for how fine the data is. The thing that helps Hatchet is this: some of these tools will generate separate profiles for each MPI rank or thread, whereas other tools unify the graphs from all MPI processes or threads. So if the graph is already unified by the tool, that makes it easier for Hatchet to import it. If not, then we have to do this extra step, and sometimes we might not know what the intent of the tool was when writing out the graph. So we are making our best guesses when doing these unifications. So extra annotations there would be beneficial. Yeah. Okay. All right. So the next question. One of the main features is that you're able to read from all these tools into some canonical representation. Could you write it back out, so that what you have as an intermediate or canonical representation can eventually be used by the analysis tools from the other profilers? Yeah, so that's a good question. And there was a related question about whether we have a file format into which we can write out the data from Hatchet. So we currently have one; there is a PR that needs to be merged right now, which is about writing out the data read in by Hatchet into an HDF5 format. This is basically using pandas' to_hdf function to write it out. We currently do not support writing out data, say if the data was collected by HPCToolkit, we don't support writing it out in Caliper's format, but I think this could be useful.
If, for example, there are certain things available in the HPCToolkit GUI or the Score-P GUI that we do not support, then I think writing it out to a different format so people can go and look at it there might be useful. But we haven't done this yet, and I think this would require some data wrangling, because we'd have to make sure, I think like Todd was saying in the chat, since every tool writes data out somewhat differently, one tool might have functions and another tool might have regions. So we'll have to make sure that we write out the data in the format that the analysis tool expects. Okay, thanks. Third question: how do the subtract operations compare to Score-P or Cube or the Scalasca toolchain? Yeah, that's also an interesting question. I'll have to look at more details of what kinds of capabilities Score-P, or the analysis parts of Score-P and Cube, provide. I'm not aware of a Python-based analysis that you can do using these tools. So what we are really providing is, as opposed to the user doing things in a GUI, you can write simple scripts or simple Python code on top of Hatchet to do such analysis. Since we work with Felix Wolf, we are aware of a paper that they had several years ago on doing some algebra on performance profiles, but I'm not sure how much of that is implemented in the current Score-P toolchain. All right, then the final question, I think. We have one minute left, I believe. So can graphs be exported to a JSON graph format? You mentioned exporting data, but... Yeah, so that's the one I mentioned. We currently support an HDF5 format, but I think if there are enough requests from people for other file formats, we are happy to add them. And you accept pull requests? Yes, definitely. Pull requests for new readers, for any new features that you want to add, they're always welcome. Okay. So I don't think there are any more questions. Okay, I'll just mention, I think Todd was saying this in the chat: we are currently working on a study comparing the various tools so we can figure out what tools provide more...
Performance analysis is critical for identifying and eliminating bottlenecks in both serial and parallel programs. There are many profiling tools that can instrument serial and parallel codes, and gather performance data. However, analytics and visualization tools that are general, easy to use, and programmable are limited. Hatchet is an open-source Python library that can read profiling output of several tools, and enables the user to perform a variety of programmatic analyses on hierarchical performance profiles. Hatchet brings the power of modern data science tools such as pandas to bear on performance analysis. In this talk, we present a set of techniques and operations that build on the pandas data analysis library to enable analysis of performance profiles. These techniques, implemented in Hatchet, enable the filtering, aggregation, and pruning of structured data. In addition, Hatchet facilitates comparing performance profiles from multiple executions to understand the differences between them.
10.5446/53670 (DOI)
Hey there, my name is Christian Kniep. I'm a developer advocate for HPC at AWS. A while back I gave a talk about containers at FOSDEM 2016. Today I would like to baseline everyone on HPC runtimes and engines so that we are all on the same page and have a common understanding about how to dissect the container ecosystem. This talk will firstly recap why containers are a good thing in general, while explaining why the standard tech was not flying in HPC. Second, I will discuss why that is in particular. With that, we are going to dissect the container ecosystem into palatable slices. Fourth, from chroot all the way to modern HPC runtimes. In conclusion, I'll recap and touch on what the next layer to be built on top of runtimes is. When Docker was announced and released in 2013, it started the widespread adoption of containerization on developer laptops. Open research sites like NERSC or CSCS started to get requests to run containers on their HPC systems soon after. But with the Docker architecture and stack at the time, it was impossible to apply it to the shared environments HPC sites are running. In the early days, running containers was used to run a service in isolation. Let us have a look at the monolithic stack from back then. We had a client running in the user context, communicating with the same binary running as a service in root context. The process running within the container is a child of the actual Docker daemon. Even worse, you were able to trick the kernel's POSIX access barrier. The container API allows you to define the user within the container and bind mount directories. Without SELinux or AppArmor watching over file system access, it was a scary thing and it seemed like Pandora's box for system admins. Also for traditionalists, whose stance was: why do we need this in the first place? We are fine with what we have. But let's be clear. This is the intended use of the API until today. In non-HPC land, that is fine. By keeping the container independent of the host and data access managed in a more cloudy fashion, not relying on the POSIX barrier but rather on whether a volume is available in the first place, that's also fine. All of that did not hold back students and researchers knocking on doors to ask how to run containers in open research facilities. Thus, folks at NERSC, Berkeley and LANL started experimenting. Hence: Charliecloud, Shifter and Singularity. Before we dive deeper into the runtimes, let us first discuss the challenges we face in HPC, or rather in shared systems, through the lens of iterations of machine learning, as they were the first to pick up container tech. Okay, first we start with the simplest version. We use TensorFlow on a node in total isolation. The data is located in a volume only accessible by the container. We download it within the container from S3 even. We only access CPU, memory and network through the container. That is how containers were designed to work. Please keep this in mind. Total portability, as we do not make assumptions about the underlying host. Build, ship and run everywhere is the slogan, and the abstraction of the underlying host made sure it worked. But containers were just too attractive not to bundle up all the weird stacks used in more progressive and hard to control fields like AI/ML. Thus, they were quickly adopted to access GPUs. Access to the driver and some devices, and off you went. NVIDIA was quick to jump all in and help by forking Docker as NVIDIA Docker.
Which was hacky at first but over the years became a good citizen utilizing standards. Data was still isolated though. As we saw in my scary example earlier, the screaming began when you needed access to a shared POSIX file system without control over the UID of the process. By then you also encountered the first cracks in your understanding of the clear cut between user land, kernel space and drivers. But you blew the lid off completely when you were dealing with distributed apps using kernel-bypassing NICs, MPI and traditional HPC schedulers. mpirun needs to reach out to remote hosts, typically using SSH, and we don't want SSH to run in all containers, right? PMI relaxes this a tad, but you still need to initialize the MPI domain across all instances. To wire up all the processes, MPI communication does not like the NATed networks typically used in container land and needs to communicate with other processes. All of that with the added complexity of MPI with its different variants, ABI compatibility and PMI. Today we as a community are pretty much able to address all those challenges. But back then, skeptics wanted all or nothing. I'd say the scariest one was the POSIX issue because it was the easiest to understand. The rest was WD-40 and duct tape. It was easy to find edge cases to poke holes and break the current setup. Some just threw in a security expert that stalled the process for sure. With the challenges in mind, let us dissect the ecosystem from the bottom up. The lowest layer is the runtime. Its sole purpose is to create an actual interactive process with some isolation, at least an isolated file system view for the process. The easiest and earliest example of a runtime might be chroot. What is needed? A root file system we can change into, and I will fetch an Alpine tarball here. To chroot, we need to have privileges, which makes sense if you think about what you can do. You can create your own /etc/passwd, change the sudoers file. chroot is pretty much the keys to the kingdom. But still, it created a process within a new root file system. Fast forward to today. We can use the same root file system to spawn a container using runc. We already got the root file system from the chroot example. The only thing missing is the runtime spec. We will create one using runc spec, and we are even creating a rootless container. No privileges needed anymore. In running the container, we have a similar look and feel as with chroot. But the configuration created applies some sensible defaults. The user within the container is mapped to an unprivileged user outside of the container. Access to devices is filtered, syscalls are limited, and so forth. runc is the first implementation of an OCI compliant runtime, from back when the monolithic Docker stack was broken up into modules for reuse. But runc for sure is the plumbing. The user needs to come up with the configuration file. It won't deal with fetching images and extracting the root file system for later use. That is why a layer above is needed. After exploring the runtime, let us look at the engine. It's taking care of all the setup: assembling the config and dealing with downloading an image, optimized for distribution. After the image is downloaded, it will extract the root FS, what is called a snapshot, which a runtime will use. In the runc example, we used wget and tar to download the image and extract the snapshot, and runc spec to generate a configuration file.
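For completeness, the chroot example at the start of this section amounts to something like the following Python sketch; it needs root privileges, which is exactly the point being made, and the rootfs path is a placeholder for an already-extracted Alpine root file system.

    # Minimal sketch of a chroot "container": run as root, path is a placeholder.
    import os

    rootfs = "/tmp/alpine-rootfs"       # an extracted Alpine mini root filesystem

    os.chroot(rootfs)                   # restrict the file system view to rootfs
    os.chdir("/")                       # make sure we are inside the new root
    os.execv("/bin/sh", ["/bin/sh"])    # replace this process with a shell in there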
We could also use Docker to download an image, extract the snapshot, and copy the snapshot over to a local directory. In the traditional Docker stack, containerd is used as the engine. And don't worry, we will have a look at other engines like Sarus and Podman as well. Now let us have a look at how this engine and runtime split plays out in the Docker stack first. This architecture is pretty busy with all the interaction options and plugins, so let me reduce the noise so that we can focus on the important parts. We are using the Docker CLI to interact with dockerd — and I'm oversimplifying a bit here: I'm ignoring the difference between content and image, ignoring the namespace service and the task service. For this talk, we will go with only three pieces: images, snapshots, and run. Okay. In Docker land, we use docker pull to pull an OCI image from a registry. For pull, we reach out to the registry for the latest image available and check the local store to see if it is already present. If not, we download the image and keep it locally. docker create does what you saw me doing in the example: snapshot the image into a root file system and eventually create a configuration. docker start takes the final step to launch the container using a runtime. All three steps are done when you use docker run, even though it won't reach out to the registry if the image name is already present — it just trusts the image. Now we follow the flow, tracing the process of a docker run command. The Docker client runs in the user context and reaches out to the Docker service, a system service. dockerd contacts containerd to supervise the actual container creation on the host. To allow for better lifecycle management, like logging and easier error handling, a shim is forked for each container. The shim uses runc to create a container, but only for exactly that: after the container process runs, runc exits and the container process is reparented to the shim. If we get back to our challenges overview, this is fine as long as we do not break too much of our abstraction. Access to the GPUs is handled via a runtime hook, an OCI hook; I'm going to dive deeper into OCI hooks later in the talk when we discuss Sarus. Due to the use of the root context, we are still in scary territory with regards to shared POSIX file systems, even though in cloudy environments the issue is partly solved by running ephemeral clusters per user, getting rid of the "shared" in shared file systems. Distributed applications still have problems, as the actual container process is not really a direct child of what the client fired up, along with the problems of network and inter-process communication I talked about earlier. mpirun docker run is just not going to happen easily. Okay, I assume we all agree that container services like Docker do not cut it. We need to go rootless and keep everything in user land. Let's first have a look at Podman. Red Hat advertises it as a replacement for the Docker stack, as it implements the same CLI interface without a system service in the middle. Calling podman run will directly fork a shim, with the same goal as the containerd-shim: better lifecycle, error and log handling. This shim, the container monitor conmon, uses runc to create a container; runc again exits afterwards to hand over responsibility to conmon. The accelerator challenge is handled with OCI hooks. Since we are running in the user context, we cannot trick the kernel by posing as a different UID.
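Since OCI hooks come up again with Sarus later, here is a minimal sketch of what a hook executable is. Per the OCI runtime spec, the runtime passes the container state (ociVersion, id, pid, bundle, ...) as JSON on the hook's stdin. This toy version only logs that state — the log path is made up — whereas real hooks like the NVIDIA or MPI ones parse the state and modify the container from the outside:

```c
/* hook_log.c — a minimal sketch of an OCI hook executable.
 * The runtime invokes the hook and writes the container state as JSON to stdin.
 * This sketch only appends that state to a log file (path is illustrative). */
#include <stdio.h>

int main(void)
{
    FILE *log = fopen("/tmp/oci-hook.log", "a");
    if (!log)
        return 1;

    int c;
    while ((c = getchar()) != EOF)   /* copy the JSON state from stdin */
        fputc(c, log);
    fputc('\n', log);

    fclose(log);
    return 0;   /* a non-zero exit would fail that lifecycle stage */
}
```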
The team around Podman also makes sure to play nice with mpirun, including logic within Podman. Let's look at the early HPC runtimes, which are alive and kicking to this day: Charliecloud, Shifter and Singularity. They were created back in the early days, and their focus was to leverage the packaging of containerized software in HPC environments. Shifter's focus was to provide a working environment for their HPC site. They created a lifecycle which allowed a user to request a Docker image to be pulled into the HPC site via an image gateway and snapshotted onto a shared file system for the user to use. The runtime was handcrafted and is similar to what we discussed in this section. Let's look at Charliecloud. Charliecloud uses exec to start a process and reparents it to the user's bash. Since it is a user-land container, we do not have the issue of POSIX breakout, and it uses external scripts on the host to deal with HPC customization of the container — using the mechanism of, but not quite being, OCI hooks. For Singularity, familiar approaches are taken. The starter-suid will be the parent of the process. It is able to hook in GPUs and deal with MPI — again not quite using OCI hooks, but baking similar approaches into Singularity. Singularity also provides options to use namespaces like the PID namespace. I had a little fight over this with Greg from Singularity when I invited him to speak at Docker HQ back in the day: are those early HPC runtimes real containers? I say when we got started it was a stretch, as most were just using the mount namespace to provide a dedicated file system, but gradually more namespaces were added. Even if they are not true OCI runtimes, as they do not implement the full runtime spec, they for sure have their place in the HPC community. Personally, I prefer the integrated approach of using standard container components and specifications, as it leads to better adoption and interoperability. That brings us to my favorite runtime, as it uses standards and components everywhere: Sarus. Sarus directly uses runc to start a container. Et voilà — no shim. Since Sarus has the HPC focus, it does not need to compromise and worry about long-running services the way Docker and potentially Podman do, as those aim to appeal to non-HPC use cases as well. Sarus uses the well-trodden path for GPU acceleration: an OCI hook. Since it is a user-land container, no worries in terms of POSIX file systems. Like the rest of the pack, pure OCI hook joy when we get to the real HPC nitty-gritty. So let's dive into that a little bit more. Let's get started with a slide from the Sarus team at CSCS, which they presented at 2020's high-performance container workshop. The actual lifecycle events got an update which deprecated prestart in favor of createRuntime. If we follow the lifecycle from create to destroy container, we can see the different stages being used to manage different aspects of the lifecycle. For example, a hook might need to check which libraries are present within the container file system; this can be checked at createRuntime, before the actual container process is started. Post-start might be used to signal that the container process has just started, and post-stop can be used to free up resources for others to use. For reference, these are the different events defined in the runtime specification: createRuntime, to customize the container after its namespaces are created; createContainer, to customize the container after the file system is set up.
startContainer is used just before the container is started, for instance to inspect the libraries used by the binary; post-start, to send signals just after the process was forked; and post-stop, to clean up or debug after the container is deleted. This is how the actual hooks are configured within the runtime specification — pretty straightforward, I think. I'll pause for a bit to let you check the code. Looking at the hooks that Sarus already provides: the standard NVIDIA GPU hook to allow for GPU use; an MPI hook that ensures the correct replacement of MPI libraries from the host; a glibc hook to replace the container's glibc with the host one, as the mounted MPI dependency might rely on newer ones; an SSH hook to inject a key and an sshd into the container; the Slurm hook that waits for all processes to be started before executing the containerized application; and finally, the timestamp hook to assist in debugging the container lifecycle. To conclude this breakout, some examples of hook configuration. The GPU hook might be configured like this on a given instance — I will pause a little bit so that you can grasp it. The MPI hook for MVAPICH2, with the condition expressed via an annotation — again, pause a little bit. Now some final thoughts on that. Hooks externalize runtime decisions, configured on the actual host the container is starting on. This means that the configuration is left to the system administrator and thus is transparent to the user; otherwise we would need to deal with some of this within the container. We might wait for a condition, even a job-wide one, to be met before starting the container on each host. The beauty of this for the user is that they can trust runtime decisions to be made on the host itself, and the system administrator is in charge of making sure certain assumptions are met. And there are certainly more things we can do with this hook mechanism: fix an infrastructure need without a hack script, fetch data before the container starts, fix file permissions along the container lifecycle. I think there are many more things we can think of. This concludes the exploration of the lower layers. Once we are all on the same page, even if that means we agree to disagree, we can look beyond the execution. Scheduling and workflow orchestration is going to be a fun one: who is in charge of execution? A Slurm cluster on top of Kubernetes? Kubernetes within Slurm? Airflow driving Slurm, supervised by a Jupyter notebook? And as another thread: how do we build and distribute images? In my humble opinion, Spack becomes pretty appealing for creating container images from the recipes we use to install natively. But we need to get smarter when it comes to distributing optimized images, taking the context of the node and job into account. Let's free ourselves from runtime decisions. Okay, that's all we have time for today. Let's recap this session. Firstly, I would like to encourage everyone to split the concerns in terms of runtime and engine. I had so many discussions over the past almost seven years in which those two concepts were mixed and the discussion went nowhere, because everyone was talking about different aspects and prerequisites. That is the reason I'm happy with where we are at as a community: we are explicitly reusing common runtimes like runc and not reinventing the wheel, as we as a community have been guilty of so many times. Secondly, please do not fall into the trap of confusing how we distribute images with how we snapshot and store them for immediate use.
If you are a user at an HPC site, you might have to file a ticket to get an image from the interweb turned into a snapshot for you to use, similar to what Shifter pioneered with the image gateway. But that does not mean you should not care about how to share OCI images across multiple sites — that would need to adhere to standards, I think. Lastly, please get out of your echo chamber and explore different options. It seems like we are all human, and the runtime we are socialized with is the one we keep using no matter what. That is fine, but it does not hurt to explore and understand the pros and cons of each runtime in different contexts. By the time this goes live, hopefully I have had the time to create the workshop you see on the slides. The goal is to provide an environment to explore all the runtimes out there using a common example code. And if you have questions, please reach out. A good place is the HPC containers Slack community, but feel free to ping me an email to have a chat about the options and how you and your users can explore what is best for them. Thank you for taking the time to join me on this session.
The container ecosystem spans from spawning a process into an isolated and constrained region of the kernel at the bottom layer, to building and distributing images just above, to discussions on how to schedule a fleet of containers around the world at the very top. While the top layers get all the attention and buzz, this session will baseline the audience's understanding of how to execute containers. The talk will briefly recap the history of containers and containers in HPC and describe the challenges we faced and how the community overcame them, eventually converging towards a sustainable model to run HPC applications at scale using containers. By attending this session, viewers will come to understand that HPC runtimes and engines are not far apart anymore and what to look out for when adopting HPC containers in 2021. We'll also poke at the next layer up - image build and distribution - to describe the challenge after we have picked the runtime and engine.
10.5446/53675 (DOI)
Good day. My name is Alina Edwards, and welcome to my talk on lessons in programming model comparisons using OpenMP and CUDA for targeting GPUs. I worked with Reuben Budiardja and Verónica Melesse Vergara at Oak Ridge National Laboratory for about a year on this project. A little bit about me: I recently graduated from the University of Arkansas with a Bachelor of Science in Mathematics and Physics, with a minor in Computer Science. I was born and raised on the island of Dominica, also known as the Nature Isle of the Caribbean because of its mostly untouched natural beauty. In this presentation, we'll look at the motivation that drew us to this work, as well as the code we used and the hurdles we faced porting from OpenMP Fortran to CUDA C. Then we'll look at our final results and at how we plan to use them for future work. So why did we do this work? The first and main reason is code portability. It is important to create portable applications in order to reach a wider audience on multiple platforms; it also minimizes the need for users to change or adjust the code to get it running. Additionally, we hoped to compare the performance of the CUDA and OpenMP APIs; both can be used to accelerate code on a GPU. GenASiS, which is an acronym for General Astrophysical Simulation System, is code currently being developed for, but not limited to, astrophysical simulations like core-collapse supernovae. Once completed, this will be a valuable tool for astrophysicists and cosmologists alike, because there is always a need for well-designed software capable of handling such complex physics for research. A couple of key characteristics in the development of astrophysical simulation code such as GenASiS are the creation of a mesh, which is the foundation on which the physics is laid out, and the implementation of solvers for hydrodynamic problems like the Riemann problem. The Riemann problem is a common problem in fluid dynamics, and it is the application currently implemented in GenASiS that we used as our test ground for this project. Due to the computational demand of these problems, it is important that there be a way to accelerate the computation. For that, we use GPUs, and this is where GPU programming comes into play. Because GenASiS is primarily written in Fortran, OpenMP offload for Fortran was the first of these GPU programming APIs to be implemented in the kernels, and the task now was to port all of these kernels from OpenMP Fortran to CUDA C. Now, porting these kernels was easier said than done, because there were three hurdles we had to overcome to get our code running successfully. Firstly, we had to learn how to program using CUDA, because we went into this project with no previous CUDA programming experience; fortunately, there were many resources available online to teach those new to CUDA. Then there was the issue of calling these new CUDA kernels from the pre-existing Fortran code; for that, we needed to create an interface between the two languages. And lastly, we needed to convert 3D arrays from Fortran into 1D arrays in C — some of us may know that Fortran array indexing is quite different from C array indexing. The first hurdle was pretty straightforward to fix: learn CUDA. We accomplished this by creating our own simple test program modeled on our simplest 1D kernels.
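For context, this is roughly what the OpenMP offload side of such a simple test looks like — written here in C rather than the Fortran used in GenASiS, so it is only a sketch of the programming model, not an actual GenASiS kernel, and the compile command in the comment is a toolchain-dependent assumption:

```c
/* saxpy_omp.c — a sketch of OpenMP target offload in C.
 * Build with an offload-capable compiler, for example something like:
 *   clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda saxpy_omp.c
 * (exact flags vary by compiler and GPU). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1 << 20;
    float a = 2.0f;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);

    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The directive is essentially all that is needed to run the loop on the GPU:
     * data movement and the launch configuration are handled by the runtime. */
    #pragma omp target teams distribute parallel for map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
    free(x);
    free(y);
    return 0;
}
```

The CUDA version of the same kernel needs noticeably more machinery, which is exactly the contrast discussed next.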
Compared to OpenMP offload, where one simply has to add a compiler directive around the appropriate kernels to get them running on the GPU, CUDA requires a more in-depth implementation in the code. For starters, to declare a kernel function we must add the __global__ keyword, as seen in the kernel declaration section; this tells the nvcc compiler which functions run on the GPU, nvcc being the compiler used to compile the CUDA code. Now, the CUDA execution environment is split into three elements: grids, which are made up of blocks; blocks, which are made up of threads; and the threads themselves, the smallest part of the environment, which execute the kernel computations. These three elements can be manipulated by the programmer to set up their own environment for distributing the work among the threads. In the second example, the environment setup, you see one of our setups for our CUDA kernels. The dim3 keyword is a built-in type provided by CUDA, and it can take one, two, or three arguments, so both the blocks and the grids can be up to three-dimensional. In the example, you will notice that our blocks and our grids are both one-dimensional; this was simply the best option for our environment setup. Another key difference between CUDA and OpenMP syntax lies in the kernel launch. With CUDA, you are required to have the triple angle brackets, and within them you parallelize your function using the environment variables you set up — the block and grid dimensions. After that, you add the parameters as normal. Lastly, the dimensions of the blocks and the grids can be referenced to create a unique global ID to index every individual thread launched at runtime, and this becomes important for flattening out our arrays. Next, we needed a way to call our C functions from our main Fortran code. Fortunately for us, Fortran 2003 allows for interoperability with C. There is an intrinsic module made available to users known as ISO_C_BINDING, and it helps create a seamless interface to call our CUDA C functions. A C function is declared as usual, with the type and the function name followed by the parameters. In the Fortran interface, we declared the function in the following way: the name of the function and the parameters, and then the bind(C) attribute, which maintains compatibility. We matched the name here with the name of the function in the C code, just to avoid confusion. Then we have the pointers, which are declared in this way, followed by whatever other variables we have — that just helps with calling the function more easily in the main code. Of the kernels we had to port in GenASiS, there were two types, 1D and 3D. Translating from Fortran to C in one dimension was trivial; the challenge arose when we had to translate our 3D kernels to 1D. In Fortran, multi-dimensional arrays are handled just about the same as one-dimensional ones — from declaring these arrays, to referencing them, to using array-array operations, the standard rules apply. But it's not so simple in C, especially when it comes to working with pointers.
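The flattening that makes this workable is sketched below in plain C. The index convention here (i varying fastest, matching Fortran's column-major layout) is an assumption on my part; the exact formulas used in the port are the ones on the slides, discussed next.

```c
/* flatten3d.c — a sketch of mapping a 1D thread/loop index back to (i, j, k),
 * assuming Fortran-style column-major order with i varying fastest.
 * maxx/maxy/maxz are the extents of the three dimensions. */
#include <stdio.h>

static void unflatten(int tid, int maxx, int maxy, int *i, int *j, int *k)
{
    *i = tid % maxx;               /* fastest-varying index */
    *j = (tid / maxx) % maxy;
    *k = tid / (maxx * maxy);      /* slowest-varying index */
}

int main(void)
{
    const int maxx = 4, maxy = 3, maxz = 2;

    /* Walk the flat array exactly once and recover the 3D indices. */
    for (int tid = 0; tid < maxx * maxy * maxz; tid++) {
        int i, j, k;
        unflatten(tid, maxx, maxy, &i, &j, &k);
        printf("tid=%2d -> (i=%d, j=%d, k=%d)\n", tid, i, j, k);
    }
    return 0;
}
```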
That difference is why we had to convert every array used in our kernels to a one-dimensional one. Our original thread ID, which we declared previously, worked great for the 1D-to-1D translation, but when it came to 3D-to-1D, we had to map those 1D threads to 3D indices using the formulas shown for the respective i, j, k indices. The following diagram depicts the conversion from 3D Fortran to 1D C. The function f on the arrow gives every cube on the Fortran side a unique index in our 1D C array; each cube represents an element of the arrays. The i, j, k variables come from the formulas on the previous slide and are expressed in terms of the 1D thread index, and maxx, maxy, and maxz represent the largest values in the respective dimensions. These are the final timings after porting each of the kernels, from the first kernel shown here all the way to the apply-boundary-conditions kernel. The most significant timing difference is here, where there is a 20% speedup between the two versions of this kernel going from OpenMP to CUDA. But the overall computational time difference is not as significant, at about a 7% speedup, which falls below the threshold we determined to be significant. From these results, it's clear that OpenMP and CUDA can work interchangeably, because the timings for the different kernels were not too far apart, so there was no loss in efficiency when going from OpenMP to CUDA; the OpenMP offload compiler does a really great job of optimizing the code when you compile it. This is important because all of this was done on the Summit supercomputer, and Summit has NVIDIA GPUs, which support CUDA programming. Now, Oak Ridge National Laboratory is getting a new supercomputer called Frontier, and that supercomputer has AMD GPUs, which only support HIP. HIP is very similar to CUDA in terms of how you implement your kernels to run on the GPUs, so having the CUDA code, we can translate it directly to HIP and have applications ready to run on Frontier in the future. This is great because the results we showed give us an idea of how well we can run the new code without losing much in timing. Special thanks to Oak Ridge National Laboratory for granting me the opportunity to do research with them and to use facilities like the Summit supercomputer to complete my research. I also want to thank NVIDIA and OpenMP for creating CUDA and OpenMP offload so that scientists like myself can use them to further our research. Thank you for listening, and I will be available after this to answer any questions you may have. So welcome back to the HPC, Big Data and Data Science devroom for the afternoon session, and thank you very much to Alina for joining us and getting up very early in her time zone for this question and answer session. Thank you for the talk, and as Kenneth has pointed out, thank you for the good graphics work on the slides. The first question, the highest-voted one so far, is from Troels — if I pronounce your name wrong, sorry, I apologize, Troels Henriksen — on why use CUDA C instead of CUDA Fortran?
So the main reason was that CUDA C was easier to work with than CUDA Fortran. I remember when I had just started working with my mentor, I asked him, why not CUDA Fortran? He said, let's not think about CUDA Fortran, because he didn't like the interface and how it was for users. So it was easier to work with CUDA C, and I agree. A possibly related question: in your opinion, do you think that libraries like CUDA are high enough level with respect to programmability for humans, or should they hide more away from you? Do we need a higher-level interface? In terms of programmability, what exactly do you mean by that? Are the details of using CUDA too fiddly? If there were just a function call to multiply my matrix on a GPU, you didn't have to worry about copying the memory in and out or where the memory resided or all the other setup steps — if it were just a single line of code to do GPU magic. I think OpenMP is definitely high level, so it's easy: you can just put it in your program and you're essentially done when it comes to OpenMP offload. But with CUDA, you have to do a little more programming and make sure that you translate your code to the GPU, so that needs to be done by the user. But once you get the background stuff working, it's straightforward from there — that's just my personal experience. Yes, I guess Kenneth was asking: is it good enough? Is the work you have to do to use CUDA too much work, or could we do better than CUDA? And it sounds like you think OpenMP is an easier interface. It's definitely easier, OpenMP. But once you get the hang of CUDA — and there's definitely a learning curve for CUDA — I think we could probably do better than CUDA. For now, though, I think it's pretty good in my opinion, in terms of the speed and how we get things running on the GPU. And it's fun to learn programming in general. So in that case, what I think I'll do is skip to Bert's question, which doesn't quite have the two votes that Kenneth's does; we'll come back to Kenneth. Bert asks: have you tried using higher-level programming languages together with GPUs, such as Julia? No, I haven't, sorry about that. I haven't used Julia, so I'm not sure. I haven't tried higher-level programming languages like Julia; I've only worked with OpenMP and CUDA and a little bit of HIP. But HIP and CUDA are essentially the same, in my opinion. Which I guess already answers Kenneth's question: have you considered looking into HIP? Yeah, we actually did. It's one of the reasons why we didn't use CUDA Fortran: as I said at the end, Oak Ridge is getting a new supercomputer, Frontier, which is being built as we speak, and it has AMD GPUs, and the AMD GPUs only support HIP. So we wanted to use CUDA in the existing applications that we already have, and we wanted to see whether we lose efficiency compared to what we already have, and if not, whether we can move it over so that we can run it on the new supercomputer and have something to run, something to test. So we looked into HIP, and I tried porting a few kernels from CUDA to HIP to see how easy hipifying the kernels was. I just had to put in one line on the supercomputer and it was ready to run; I had to make a few little tweaks, but other than that, it was pretty straightforward.
Ehex asks, or rather points out, that OpenACC is another player in this area. I've seen people make distinctions between OpenMP and OpenACC in terms of ease of programming — have you had a look at that for GenASiS? I haven't dabbled in OpenACC myself, but during my time at Oak Ridge there were a couple of talks and workshops on OpenACC, and I was able to join those and learn a little bit about it, but we never got the chance to implement or test OpenACC in GenASiS. I've got a question about how long the effort took to port from Fortran plus OpenMP to Fortran plus C plus CUDA. Well, I was really rusty in Fortran, so I had to relearn a lot of the syntax, because I had worked with my mentor on a previous project the year before and then stopped, so I kind of forgot everything I had learned and then went back into it. Then I had to learn about programming in CUDA, so I first made a little test program to understand how to implement CUDA and work with the kernels. After I got everything running and tested the timing between the different kernels, it came to actually working on the code. So it took quite a while, especially when it came to testing, making sure that it ran the same — we used VisIt to visualize the output we had, running the simulation for the Riemann problem, so we had to make sure that everything worked well when it came to the 3D kernels. We had to figure out how to convert from 3D in Fortran to 1D in C, and the main reason we used 1D in C is that it's pretty challenging to handle 3D arrays in C, so I had to use 1D C. There was a lot of research to figure out how to convert from 1D to 3D and 3D to 1D. So it took a little bit of time — I would say maybe eight of my nine months working there. Right, so if you're coming from zero experience, you can do it in that amount of time, and maybe if it were two sets of programming languages and models you're already familiar with, it would take less than that. Right — if I had known about CUDA earlier, I could have cut that time down. I've got a question about GPU hardware. Frontier is getting AMD, so that's why you're looking at HIP. Intel is coming back into the market with their Xe GPU. Would you consider keeping the GenASiS code in some higher-level thing like OpenMP so that you could make use of three different vendors' GPUs? I think so. I think the main goal is to make GenASiS as portable as possible so that many people can benefit from it, especially the main target audience of astrophysicists and cosmologists with the simulations they want to create. So when Intel gets their Xe GPUs out — that's what they're called, right, Xe GPUs? — then we'll probably have someone, maybe not me, maybe some other students, learn how to program for that GPU and get it running. We've just got the warning that there's less than one minute left of the Q&A session, so I think we've probably had the last question and answer.
In this talk we explore two programming models for GPU accelerated computing in a Fortran application: OpenMP with target directives and CUDA. We use an example application Riemann problem, a common problem in fluid dynamics, as our testing ground. This example application is implemented in GenASiS, a code being developed for astrophysics simulations. While OpenMP and CUDA are supported on the Summit supercomputer, its successor, an exascale supercomputer Frontier, will support OpenMP and translate CUDA-like models via HIP. In this work, we study and describe the differences and trade-offs between these programming models in terms of efforts and performance. Our hope is to provide insights on productivity and portability issues within these programming models.
10.5446/53676 (DOI)
Hello everyone, I'm Adi, a PhD student at the National University of Singapore, and today I'm going to present our work on accelerating HPC applications with out-of-order commit processors. In this presentation, I will first give an introduction on how we should design and optimize general-purpose CPUs when we are using specialized computing elements, and also talk about a phenomenon called the memory wall. Then I will give you some insight into how a hardware-software co-design can address this problem. Then I will present our solution: non-speculative out-of-order commit with branch reconvergence analysis. I will also demonstrate the solution with some experimental results, and at the end I will conclude the presentation. In today's HPC systems we use specialized or domain-specific computing elements in order to improve performance and efficiency. These specialized computing elements include GPUs, FPGAs, ASICs and many others. But the question is: is there still a place for a general-purpose CPU in HPC? In this slide you see a figure from an IEEE Micro paper from 2012 that describes how the behavior of workloads on the CPU changes when we move from a CPU-only implementation to a CPU-GPU implementation, meaning a portion of the code has been offloaded to the GPU. In this graph there are three sets of benchmarks: first, benchmarks where nothing has been mapped to the GPU; second, mixed benchmarks where the workload has been evenly distributed between CPU and GPU; and third, benchmarks where most of the workload has been offloaded to the GPU. Here our focus is only on the second and third sets of benchmarks. On the Y axis of the graph you see the average number of instructions executed in parallel within the instruction window, also known as instruction-level parallelism or ILP; the instruction window is the number of instructions the CPU can examine at once to find more parallelism. As you can see, when we move from a CPU-only implementation to a CPU-GPU implementation, ILP drops by about 5 to 10 percent, mainly because less regular code is left for the CPU to execute. Hence, after moving to a CPU-GPU implementation, there is less parallelism for the CPU, which means it is more difficult to improve performance. In these graphs we have the same benchmarks, and on the Y axis you see the breakdown of memory load instructions, classified by how hard it is to get the data from memory. Strided and patterned loads can be easily handled by existing hardware and software prefetchers, but hard memory loads are difficult to predict and prefetch. As you can see, when we move from a CPU-only implementation to a CPU-GPU implementation, the percentage of hard memory accesses on the CPU increases by about 17 percent. This means the CPU spends more time getting data from memory, and hence it is more difficult to increase performance. In this figure we see the memory wall, which is an important issue, and in this graph you can see why: the performance of computing elements like CPUs has been increasing by about 60 percent per year, but memory performance is increasing only 7 percent per year. And we anticipate that this gap will grow even more with the end of Moore's law and Dennard scaling. In this slide I'm going to present an important structure in out-of-order processors called the reorder buffer; this structure is used to enforce the in-order release of resources and to provide precise exception handling.
When instructions come into the processor, they are inserted into this queue in program order, which means the instruction at the head of the queue is the oldest instruction and the instruction at the tail is the newest one. Let's assume we have code like this, an if-then-else statement: we have a branch, and also a load before the branch that is going to get its data from DRAM, which means it is going to take a long time. These instructions are inserted into the ROB like this, and the question is: if the branch depends on the load instruction and needs its data, what are we going to execute after the branch to fill the ROB? Previous solutions come in a variety of flavors. In the first class, which we call early speculation, the goal is to predict and prefetch data before the program accesses it. The branch then uses this speculative data to execute one of the paths after the branch and fill the ROB with those speculative instructions. There are numerous previous solutions based on this idea, like prefetchers, value prediction, in-memory computation and data recomputation. In the second class of solutions we have late speculation, where the goal is to continue execution after the branch even without the data: we predict one of the paths and then fill the ROB with those speculative instructions. To name a few of these solutions, we can point to branch prediction and runahead execution. But the question is: can we find other, non-speculative work after the branch? And the answer is yes. When we are able to detect these independent regions after the branch, we fill the ROB with those instructions, and since they are non-speculative, we can release their resources earlier. Our contribution is to use static compiler analysis to detect these branch-independent regions. As you have seen, many branch dependencies in the hardware are artificial, and not all instructions depend on the most recent branch, so it is really important to detect the true branch dependencies. The idea of our work is to use true branch dependencies to release resources as early as they become non-speculative, but not necessarily in program order, which implies that we are going to release resources and commit instructions out of program order. Also, in this project we are using open-source hardware and software. For the compiler implementation we use LLVM as an open-source compiler infrastructure, and we build our system on the RISC-V ISA, which is really good for exploring hardware-software cooperative strategies. It also makes efficient communication between the different layers of the system easy, and the simplicity of the instruction set architecture makes it easier to handle exception cases. Now I'm going to give you an overview of our solution. In our static compiler analysis, the goal is to detect true branch dependencies and mark them in the code. In this figure you see a simple if-then-else statement, with a branch in the first block of the code. In the first step we find the reconvergence point of the branch; the reconvergence point is the point in the program that execution will reach regardless of the branch outcome.
In the second step we detect control-dependent instructions: these are instructions that may execute between the branch and the reconvergence point, depending on the branch outcome. In the third step we detect data-dependent instructions: these are instructions whose data will be different if we execute different paths after the branch. In the last step we detect independent instructions and mark them as independent from this specific branch. Here I'm going to give you a high-level overview of the processor, and I will give you the details in the coming slides. Basically, in the front end of the core we track true branch dependencies and all the information we get from the compiler, and in the back end of the core we have a lightweight implementation for reordering instructions based on the branch dependencies. As you can see, when we decode an instruction in the front end, we use these setup instructions to get the data from the compiler. When we send an instruction to the back end, we insert it into the ROB, and from the ROB we distribute the instructions to different commit queues; we use a table called the commit queue table (CQT) to do this distribution. This means that all instructions in one commit queue depend on only one branch, and they only have to wait for that branch to be resolved in order to release their resources. Also, when we commit an instruction out of order, we have a table that keeps track of all out-of-order committed instructions; we call this table the CIT, and we use it to recover from branch mispredictions — I will give you the details in the coming slides. Here is a walkthrough example for our compiler. As you can see, we have an if-then-else statement with a branch in the first block. In the first step we detect the reconvergence point of the branch; in this example it is label two, in the last block of the code. In the second step we detect control-dependent instructions, which in this example are marked in the red area. In the third step we detect data-dependent instructions. As you can see, the last six instructions in the last block of the code use the -20(s0) and -24(s0) stack locations, and these locations are updated by control-dependent instructions in the red area. This means that the data in these locations will be different if we take a different path after the branch. But the first four instructions in the last block are considered independent from the branch. In the last step, we mark these regions. For the branch we use a set-branch-ID instruction that assigns a unique ID to the branch; in this example we assign one as the ID of the branch. For all the other regions we use a set-dependency instruction that says, for example, after label one the next eight instructions are dependent on branch one. You can also see that the first four instructions in the last block are independent from branch one, but they might be dependent on an earlier branch like branch zero. Now I'm going to give you the details of our out-of-order commit core. We have three main flows in the processor: first is the branch dependency flow, and second is the out-of-order commit flow.
And third is the recovery flow for branch mispredictions. The goal of the branch dependency flow is to propagate the compiler information to the processor. This means that when we decode the setup instructions — set-branch-ID and set-dependency — we update the branch ID table and the dependency counter table to keep this information. When we insert instructions into the ROB, we assign a branch ID to each instruction, which tells us which branch each instruction depends on. The out-of-order commit flow of the processor is the implementation of reordering instructions based on the branch dependencies. For this purpose, we have the CQT table, which we look up to distribute instructions between the different commit queues. When we commit and release a branch from the pipeline, we update all the previous tables in order to make space for the next incoming branches. In the recovery flow from branch mispredictions, we have the CIT table: when we commit an instruction out of order, we insert that instruction into the CIT, and when we have a branch misprediction and are going to refetch an out-of-order committed instruction, we look up this table in order to prevent the re-execution of that instruction. Now I'm going to present some experimental results based on our solution. For simulation, we use gem5 in system-call emulation mode. For the compiler implementation, we use LLVM 10 with the RISC-V back end. For the benchmarks, we use the SPEC CPU 2006 and MiBench benchmark suites. You can also see the system configuration we used for simulation; we use a Skylake-like processor as the baseline. In this figure, you see the performance improvement of our solution compared to other ideas for different applications. In-order commit is basically the implementation in most existing processors. Non-speculative out-of-order commit is a version of out-of-order commit in which we wait for all branches, loads, and stores to become non-speculative before committing instructions. Reconvergence out-of-order commit is the scheme we have implemented. We also have an ideal version of our solution, where the size and number of commit queues are unlimited, and an aggressive, speculative out-of-order commit idea that serves as the upper bound of our solution but is not implementable. As you can see, we get a 22% performance improvement on average, and this improvement can reach up to 230% for some applications. We also reach 95% of the fully speculative, aggressive out-of-order commit that is the upper bound of our solution. In this figure, I'm going to show what is behind the performance improvements we get and why the improvement differs between applications. Here I'm going to focus on two applications, bzip2 and mcf: bzip2 shows the least potential for out-of-order commit and mcf shows the highest potential. In this figure, each point represents a branch in the program. On the X axis, you see the number of instructions that depend on that branch, and on the Y axis, you see the criticality of the branch — basically, the number of cycles that branch has stalled the ROB and blocked the processor from executing more instructions.
For bzip2, you can see that we have more dependent instructions per branch and these branches are not very critical. This means we have fewer opportunities to improve performance, which is why we only get about a 10% performance improvement. On the other hand, we have fewer dependent instructions per branch for mcf, and those branches are more critical. This means we have more opportunities for performance improvement, which is why we get around a 230% performance improvement. We also investigated the impact of the size of resources on the performance we get from our solution. The smallest core we investigated is a Nehalem-like processor, then we have a Haswell-like processor, and the biggest core we investigated is a Skylake-like processor. You can see that for SPEC CPU 2006 we get very close to aggressive, speculative out-of-order commit — about 95% of the upper bound. Also, when we increase the size of the resources, we get a higher performance improvement, which means that when we have a bigger core, we have more opportunities and more resources to take advantage of the out-of-order commit solution. Here you can see that we get a 4% overhead on power and an 8% overhead on the area of the processor, but these overheads are low compared to the extra performance we get from our solution. If you want to read more details on this idea, I refer you to our paper, which will appear at ASPLOS 2021. In conclusion, we have seen that the design of CPUs matters for HPC applications. We have also seen that efficient communication between software and hardware and between different layers of the system gives us more intelligent resource management and extra performance. And using open-source software and hardware gives us more flexibility to explore hardware-software cooperative strategies. Thank you so much for your time.
With the end of Moore’s law, improving single-core processor performance can be extremely difficult to do in an energy-efficient manner. One alternative is to rethink conventional processor design methodologies and propose innovative ideas to unlock additional performance and efficiency. In an attempt to overcome these difficulties, we propose a compiler-informed non-speculative out-of-order commit processor, that attacks the limitations of in-order commit in current out-of-order cores to increase the effective instruction window and use critical resources of the core more intelligently. We build our core based on the open source RISC-V ISA. The hardware and software ecosystem around RISC-V enables building custom hardware and experimenting new HW/SW cooperative ideas. While modern out-of-order processors execute instructions out-of-order to increase instruction-level parallelism, they retire instructions and manage their limited resources (register file, load/store queue, etc.) in program order to guarantee safe instruction retirement. However, this implementation requires instructions to wait for all preceding branches to resolve in order to release their critical resources, which leaves a significant amount of performance on the table. We propose a HW/SW co-design that enables non-speculative out-of-order commit in a lightweight manner, improving performance and efficiency. The key insight of our work is that identifying true branch dependencies, if properly understood, could lead to higher performance. Dependency analysis shows that not all instructions depend on the most recent branch in the reorder buffer and therefore, there are missed opportunities to improve the performance by not releasing the critical resources of independent instructions. Our processor employs a HW/SW co-design where the compiler detects true branch dependencies that enables the hardware to manage critical resources more intelligently. Also, we introduce a new interface between hardware and OS to enable precise exception handling by exposing recent changes of out-of-order committed instructions. In our talk, we will look at the potential of our out-of-order commit core for HPC workloads. Initial studies with C-based HPC applications show promising results, and we intend to show results for a variety of additional HPC workloads to evaluate the potential of the design. We believe our HW/SW co-design might be a way to build the processors in the future. This work will appear in proceedings of the 26th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2021).
10.5446/53677 (DOI)
Hello. I'm Robert McLay. I've been working at TACC for over 12 years in HPC. It's good to be here, virtually. I'd like to thank Kenneth and the other organizers for putting this thing together; I am sure that it's more work than I realize. So today I'd like to talk about my XALT project and what it's been like to attach to just about every program under Linux. First I'd like to thank my work colleague Matt for the design of the XALT logo: Big Brother is watching. Work started on XALT in 2013 as an NSF project. Today I'm going to talk about what XALT is, how it works, and why XALT is of interest to the HPC community. Then I'll get into the main part of my talk, where I describe the surprises that have happened. They have occurred in three main areas: name collisions, memory allocation, and container issues. Finally, I'll end with where you can get more information about XALT. So, I work at a supercomputer center, and working there means we want to provide good service to our users. It means that we want to know what programs and libraries they run, so we can know what software to support or not. We also use the top programs to benchmark our next system. We all know from accounting records who our top users are, but not what programs they use. With XALT we do. We'd like to know a lot of things about the programs, but I want to stress that XALT is a census taker, not a performance tool. I'm interested in science, so I'm interested in the really long runs. The way XALT works is that it produces JSON records which are then stored in a database for later analysis. It's used all over the world. What we're trying to do is understand what our users are doing. So we want to know what programs and libraries you use. We want to know what imports from R, MATLAB, or Python are used, so we can track each one of those major packages. We want to know what the top programs are by several measures: core hours (i.e. number of processors times how much time), or just by how often a program runs, or by the number of users. We want to know whether the program executable was built by the system, namely by us, or was user built, or was built by one user and used by another. We can do all that with XALT. We want to know whether the program was written in C, C++ or Fortran, and by that I mean what main is written in, or more importantly what linking program was used. We can track the number of MPI tasks. We can do threading. We can even find out what functions are used by the executable, so we can know what MPI functions there are, what BLAS functions there are. And as I'll point out, I need XALT to be as efficient as possible, so we want it to be as lightweight as possible: when it is doing a recording, it still takes less than 100 milliseconds, and when it's not recording, it's amazingly fast. I have the XALT documentation at Read the Docs, and through that I've been able to connect it to Google Analytics. In this slide, I've shown which cities have read at least part of the documentation; the larger the circle, the greater the number of times the documentation has been read. I really don't know where XALT has been used, but I do know where it's read. I do have some clues when people submit bug reports, but there are a lot more people reading the documentation than I've ever seen on the mailing list.
So you can see the documentation has been read all through the United States, through Latin America and South America, all over Europe and Russia, in India, China, some in Africa, some in Australia and New Zealand. So I can say fairly safely that XALT is used all over the world. Now, one of the interesting things is: while the good news is that XALT can track every program, the bad news is that XALT tracks every program. I cannot record every user program — there's just too much data. XALT has a filtering mechanism to avoid lots of programs: it doesn't track /bin/ls, it doesn't track bash, it doesn't track lots of things. But still, recording every user program is just too much data. On our 132-node computer, a single two-hour job had over 2 million executions, and that generates so much data that it took my VM, which contains the database, close to five days to read it. This forced me to sample serial, that is non-MPI, programs. These are the rules we use, and what I mean by sampled is that there's a random number generator that decides whether a run gets recorded; each choice is completely random. Short-running programs only have a one in ten thousand chance of being recorded, medium-running programs a one in a hundred chance, and long-running programs are recorded every time. That's how I cut down on the amount of data I'm getting, so the database does not get overwhelmed. So how does XALT work? Here I'm showing a simple Hello World program written in C. This is followed by xalt.c, which I will build into a shared library on the next slide. There's nothing special about the routine names myinit or myfini; it is the attribute lines that register one routine into the init array and another into the fini array. The init array and the fini array are a special part of the ELF binary format, which is what Linux uses for its executables. Any routines assigned there will be run before main or after main. This ELF trick works with programs written in any language — C, C++ or Fortran — and I believe these arrays are there to support C++, where constructors can run before main and destructors run after main. So this is how it gets implemented, and these attribute lines are available in GCC; I'm not sure where else, but that's all I need. So let's show how this works. At the beginning we build the Hello World program and it runs as expected. The interesting part is that after building the shared library, libxalt.so, and setting the LD_PRELOAD variable, we have modified the behavior of our Hello World program — without consulting or in any way modifying the source or the executable of the Hello World program. Now, this doesn't work with statically linked programs; however, it's pretty difficult these days to build a statically linked program anymore — there aren't many distributions that include a libc.a. So XALT is now linking with every program, and it's kind of like being a developer on every program that runs under Linux. XALT shares the same namespace as the user program. XALT shares memory allocation — there's just one malloc. XALT shares the environment space. And with containers, XALT can't depend on system libraries. So sometimes I've fixed broken programs. So let's get to the first place where XALT ran into a collision with our users: I have to do something to avoid namespace collisions.
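Before getting into the collisions, here is a minimal sketch of such a preloaded "census" library, to make the trick concrete. The routine names and messages are illustrative, not XALT's actual code, and it assumes a compiler that supports the GCC constructor/destructor attributes:

```c
/* preload_census.c — a sketch of the init/fini-array trick described above.
 * Build:  gcc -shared -fPIC preload_census.c -o libcensus.so
 * Use:    LD_PRELOAD=./libcensus.so ./hello
 * Names are illustrative; XALT's real library does much more than print. */
#include <stdio.h>
#include <unistd.h>

__attribute__((constructor))
static void my_init(void)          /* placed in .init_array: runs before main */
{
    fprintf(stderr, "[census] pid %d starting\n", (int)getpid());
}

__attribute__((destructor))
static void my_fini(void)          /* placed in .fini_array: runs after main */
{
    fprintf(stderr, "[census] pid %d finished\n", (int)getpid());
}
```

With LD_PRELOAD pointing at the resulting shared object, any dynamically linked program picks up the init and fini routines without being rebuilt or modified.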
I use macros to obfuscate my normal routine names to avoid collisions with user routine names. So, for example, myinit gets renamed to __xalt_init. However, I can't protect system routines. A user code had their own random routine, which was linked in and used rather than the libc one. The calling sequence for their random routine was completely different from the system one, and so the user program aborted even before main was called. This was my first user-XALT collision. And I've had all kinds of issues with libuuid. So what is libuuid? libuuid is a library that generates a universally unique identifier — that's what UUID stands for. I use that to tag every executable that runs, and also every build, under a different scheme. But I still need these values, so XALT uses dlopen and dlsym to dynamically link with libuuid. I'll get more into this when we get to containers. So, memory allocation issues. XALT runs before main starts, in what I call myinit, and it allocates memory there; for some MPI programs, it also generates a start record there. XALT saves environment variables in the program's user space in myinit. Then after main completes — if it does complete — in myfini, I allocate memory as well, and this is where it generates end records for all programs: that is the JSON record I'm going to store in the database. And I want to make system calls from myfini. This slide is also a foreshadowing of problems to come: each of these four points will become an issue. So here's my second collision. A user code created a linked list, and their program failed to initialize the end of the linked list with a null. But Linux very kindly zeros all memory before the program starts, so the user code automatically got the null terminator. However, XALT allocates and frees memory before main has ever run. So the user code ran fine without XALT but failed with XALT. Here's where I joined the development on this code: I had to go in and figure out why it was failing, and noticed they weren't initializing the linked list. However, fixing individual codes is not a scalable solution. So what XALT does now is: if any memory is allocated and freed, it is zeroed before it is returned. This should avoid this problem with user code. My third collision: XALT is sharing memory allocation with user code, and it's shocking to find out that not all programs allocate and free memory correctly. XALT would sometimes fail when freeing memory after main, in the myfini routine. The result is that XALT allocates but doesn't free after main is called. This seems to work; it's freeing that tends to be more dangerous, or at least that's been my experience. So here's my fourth collision. Some programs do questionable things — "questionable" was how this bug was reported to me. A user out in the world sent me a snippet of the code where some shared package lived, a block which I have reduced to show the basic intent of what they were trying to do. What happens is that this code in main is going to try to copy the environment, and the spt_copyenv routine takes the old environment, clears it, and then walks over it and reinitializes it. Now, this is not good, and I still don't know why somebody would want to do this, but this is what they did. And this code works without XALT.
It's the first thing that runs in their program. It works if no setenv is called before the spt_copyenv routine is called. Well, it frees memory and then walks the freed memory. However, XALT sets environment variables in myinit, so this segfaults with XALT. So I had several iterations with the developers. I had to explain that XALT runs before main, so their guarantee that no setenv had been called was no longer true. So after several iterations, they were able to do a shallow copy of the environment and rebuild the environment variable table. And it clearly states in the manual that clearenv doesn't erase the strings containing the environment definitions, just the pointers to them. So my fifth one was: earlier versions of XALT used C++ routines to store the JSON strings as hash tables so I could write them out. And the main XALT library, libxalt_init.so, is written in C. And so it would fork and exec to call these helper programs to actually generate the tables. But a particular version of the MPI library doesn't allow forking and exec'ing after MPI_Finalize. So I had to go in and rewrite the hash tables in C using the public domain uthash.h, and I had to merge it in with the JSON generation in the C library. uthash.h is a public domain thing that supports hash tables and other kinds of tools, and I found it very useful when I had to rewrite these C++ routines in C. Users continue to surprise me. And one of the things that was surprising is that somebody, at least one group, is using deep learning programs, which use short MPI programs, to train their neural networks. So it became clear that not only do I have to sample scalar, non-MPI programs, I have to sample MPI programs as well. And if I'm sampling, I do not want to generate a start record. I just don't want to have data that I have to throw away. However, some MPI programs are long running and do not terminate. They are typically simulation programs, say a weather code, which is running forever, and they just write out restart files every so often. And when the job terminates because they've used up their 24 or 48 hours of runtime, they just pick up where the last restart file was written and continue from there. So I will never get an end record from those kinds of programs. And those are the programs I'm interested in capturing data on. So I have to generate start records for them. So, arbitrarily, I have decided that if you have 128 tasks or more, I generate a start record. And if you have less than 128 tasks, there's no start record. And again, these are site configurable. So, containers. Like Kenneth, I am not a fan of containers. However, they are here and I need to support them. So XALT, when it configures, can require certain libraries like libuuid.so on the host, but I can't do that in containers. XALT still requires libuuid.so but does not link to it. And by the way, the notion of doing this statically doesn't work reliably, because typically there is no libuuid.a available. And also, XALT used to require libz and a few other libraries. This was so that XALT could reduce the size of the syslog message. What I used to do is, well, XALT is generating JSON records, and there has to be some way to record that data from the user code. I can either write to a file or I can write to syslog, among other choices.
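As an aside on the libuuid point just made: "requires libuuid.so but does not link to it" in practice means dlopen() and dlsym(). A minimal sketch, with an assumed install path and no claim to match XALT's real code:

```
#include <dlfcn.h>
#include <stdio.h>

typedef unsigned char raw_uuid[16];                /* same layout as uuid_t */
typedef void (*uuid_generate_f)(raw_uuid);
typedef void (*uuid_unparse_f)(const raw_uuid, char *);

static int build_uuid(char out[37])
{
    /* An absolute path is deliberate: dlopen() ignores changes made to
     * LD_LIBRARY_PATH after the program has started, but a full path works. */
    void *h = dlopen("/opt/xalt/lib64/libuuid.so.1", RTLD_LAZY); /* assumed path */
    if (!h)
        return -1;
    uuid_generate_f gen = (uuid_generate_f)dlsym(h, "uuid_generate");
    uuid_unparse_f  unp = (uuid_unparse_f)dlsym(h, "uuid_unparse");
    if (!gen || !unp) {
        dlclose(h);
        return -1;
    }
    raw_uuid u;
    gen(u);
    unp(u, out);                                   /* 36 characters plus NUL */
    dlclose(h);
    return 0;
}

int main(void)
{
    char buf[37];
    if (build_uuid(buf) == 0)
        printf("run id: %s\n", buf);
    return 0;
}
```

Older glibc needs -ldl at link time; newer glibc folds dlopen() into libc.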
So what I used to do was call compress and then base 64 to encode the message and then I would base 64 decode it and decompress when I was inserting the data. But I have a problem with that. So what Exalt does is it copies the system library live UUID.SO to the install directory. Then it does a DL open on the direct path. It turns out that you cannot modify the environment variable LD library path and have DL open search the new one. It only searched the one at the start of the program. So, but you can specify the absolute path as something you want to DL open. So that does work. So what I do is I know where the Exalt path is and I just try and link to it. And then DL SIM necessarily has the necessary or I use DL SIM to connect to the necessary teams to build the UUID that I need. However, this doesn't work for libz. I can certainly copy libz to the shared library, I mean to the install directory, but the compressed routine is actually a macro and it has the version of compress in the actual arguments. And for all I know, they could change the definition of that without me knowing it. So I can no longer depend on having a compression library and I don't want to write my own. So instead, what I did is I just no longer compress on base 64 and code when writing the syslog. I just send the straight text. And that works fine. So in conclusion, this is my experiences dealing with user codes throughout the people who use Exalt. I want to reiterate that this happens rarely. I've got seven or eight times where I run the problems and this is over millions and millions and millions of choices or millions and millions of executions that Exalt has collected. So it's a small area, but it does happen. So if you want to find out more about Exalt, you can go to the Exalt documentation and read the docs. And if you want to download the source, it's available from GitHub. And if you want to join the mailing list, you can join it through sourceforge. Thank you very much. I'll now entertain questions. Hello? Yeah, I'm here. I just looking at a question and you answered the last urge. What about Liga and Zipkin? This is the first time I've ever heard of either of them. So I have no direct support for them. But again, Exalt has two parts. Exalt is generating JSON records and then they can be transported to whatever wants to receive them. Okay, that makes sense. I had a question. You mentioned that 132 nodes recorded quite a bit of data. Are there guidelines for people with larger systems with 10,000 nodes or more? How to change the short, medium, or long run? So Exalt, we run Exalt on our large systems. We have 8,000 nodes. And so the sampling rules which I described support our system. I just wanted to point out it, wanted to point out that 132 nodes, if you record every task, it's going to overwhelm the database.
XALT is a tool run on clusters to find out what programs and libraries are run. XALT uses the environment variable LD_PRELOAD to attach a shared library that executes code before and after main(). This means that the XALT shared library becomes part of every program run under Linux. This talk will discuss the various lessons learned about routine names and memory usage. Adding XALT to track container usage presents new issues because of which shared libraries are available in the container.
10.5446/53299 (DOI)
Hello, my name is Jason McDonald, I'm known around the internet as Cogemouse92, and today I'm going to be speaking to you about escaping the cargo cult. So what is the cargo cult? A cargo cult is imitating actions or structures without understanding them in hopes of achieving a related outcome. There's an example of that. You can see it's built at a strong. This is from the South Pacific. During World War II, Allied forces were occupying many of the islands and using them as bases of operation for fighting in the Pacific theater. When they would get supplies in via a plane, they would often trade these with locals for things that they could not get like fresh produce. The locals came to really rely on the food and medicine enough for supplies that the Allies would trade with them. In many cases, this was their first real contact with the Western world, so much of this stuff was quite advanced to their eye. There's a saying that any sufficiently advanced technology looks like magic, and that was certainly the case here. When the Allies pulled out after World War II, many of the people in these South Pacific countries wanted the supplies back, but they didn't really understand where they came from. They thought it was magic. It was supernatural that these were the gods working with their ancestors to bring these gifts to them. So they decided to start imitating what they had seen to try and bring the planes back with more supplies. They would build runways, control towers, and head phones of coconut shells. Very ingenious things. They would do US-style military parade drills, and they were trying to summon back the gods with these supplies that they wanted so much. The problem was they didn't understand what any of this was, so they were imitating actions and structures without understanding them in hopes of achieving a related outcome. Let's talk about Python packaging. Packaging is fun, right? Everyone loves Python packaging. I mean, this little platform certainly enjoys it. By the way, if you want that on your laptop, you can get that from monotream.club online. I want one of those cute little guys. It looks like a blast. Packaging is fun, except it's not. When it comes time to package our own projects, we're a bit lost and trying to remember how to do things. There's all these different conflicting tutorials and guides and long-drawn out documentation that we can't make heads or tails of. So we ask for help. And what do we get every time? Templates! We get templates. Here's a template for a repository structure. Here's a template for a setup.py file. Here's a template for manifest.in. Who knows what that does? And we just copy it over and cross everything we can cross and pray really hard and hope it builds. I should say. Sometimes it is compiling. Cargo colts are imitating actions or structures without understanding them in hopes of achieving a related outcome. Now the explanation I'm going to be giving here, breaking down what this stuff actually is and why it works, comes from my book Dead Simple Python, which is coming out later this year from No Starts Press. In this book I go into a lot of the idiomatic practices of the Python language and, more importantly, why we do things the way we do. So whether you've been programming in Python for five minutes or five months or five years, you'll find something in here that is insightful. And I have an entire chapter dedicated to packaging and another entire chapter to testing. So definitely check that out. But let's break this down. 
What are these different pieces for? Why do we use them? Now I'm not expecting you to memorize the setup.py file or any of this stuff. You can certainly watch the talk later, I'm sure. And I'm going to have some links at the end. But the main purpose today here is to help you understand why these things are there. So you can at least recognize the patterns when you see them again. So let's start with the source folder. It's recommended to put all of the source code for your packages into a dedicated source folder or so many articles tell us. Other articles might say, well, it's not really necessary. There's one particularly prominent article that rebuttals this, but it's out of date by several years. But the trouble is why. Isn't this more complicated? I mean, we like flat, overnested in Python. So why not just put, in this case, in my project, why not just put the time card file right into the main repository and call it good? Why does it need to be under a source directory? There are several reasons why we use source directories in Python packaging. And it is somewhat optional, but it does make everything work smoother. Chiefly, it simplifies your packaging scripts as you're going to see shortly. It keeps your source code and your packaging scripts clearly separate as well. Now, in a small project, that might not feel like a big deal, but when you're dealing with a large application with many components, multiple packages, and you're distributing to multiple systems in multiple ways, this really becomes helpful. So keeping your source code and your packaging scripts clearly separate is a benefit. It also prevents a lot of common packaging mistakes. Oftentimes, there's mistakes made because of assumptions about the structure of a repository or other things. And so this helps prevent some of those problems. A lot of this is because it forces you to install the package to run your tests. You cannot run your tests without installing the package that you're developing. This catches the assumptions about current working directory, which is a major problem when you're dealing with in-package resources like your icons or pre-written text files for like templates or maybe sound files in a game. You want to package this data in there, but how do you access it from the code? There's a thousand ways to do it wrong. Really only one or two ways to do it right. And the trouble is most of the wrong ways look right until you package it. So this catches those assumptions. And it ensures you've solved packaging early. I one time built a game. It took me about a month. I was very proud of it. It was an excellent little game. And I tried to package it and two years in counting still haven't gotten it packaged. And that comes down to the framework I was using. So I need to go back in and redo that. Had I tried to package early on in the process, back when it was still a hello world, I would have found out pretty quickly the problems I was running into. Now, reason a lot of us don't want to do this is because installing the package sounds scary. To run the test, I have to install the package, or how do I install the package? Hang in there. I will explain how that works, but we have to get a few pieces into place so we can install the package to begin with. The next piece we need for that would be setup.py. You see this file a lot. It's primarily used by setup tools. In fact, what I'm using here comes out of setup tools. 
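For orientation, the layout being argued for looks roughly like this, using the Timecard names from the talk (the tests/ directory comes up again later):

```
timecard/                 repository root
|-- README.md
|-- LICENSE
|-- setup.py              packaging scripts stay at the root...
|-- setup.cfg
|-- MANIFEST.in
|-- src/
|   `-- timecard/         ...while the importable package lives under src/
|       |-- __init__.py
|       |-- __main__.py
|       `-- resources/    icons and other non-code data
`-- tests/                out-of-place tests
```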
And I recommend understanding setup tools even if you prefer to use poetry or flit for a few reasons, chief of which being that way if you encounter setup tools package and you want to repackage it using poetry or flit, you can do that. There are also a number of packaging tools like Snapcraft or Flatpak which make use of setup tools and setup.py. So it's just a good thing to know even if you prefer a different packaging scheme, it still really helps just in terms of understanding how this whole system works. So we have a shebang up at the top of our setup.py file and we import a couple of things from setup tools. Two functions we need, find packages and setup. Okay, this chunk of code here, which could be distributed throughout your setup.py in various ways, this is how you use the readme.md as the pypy project description. That means that my readme.md file which is written and marked down, winds up appearing in rendered form on the Python package index web page for my package. And in fact, if you look up Timecard app on Pypy, you will find that page right there. So this is how we do it. We use pathlib because pathlib just makes paths easier and then I, that top part there, I am just loading the readme.md file from the brooder to repository which is where setup.py lives. And then everything else in this setup file is going to be inside of this setup function call. And all of these little things like long description and long description content type are actually just named parameters passed to the setup function. So there's really no magic going on here. So I am passing along description in that I read in from the markdown file and I'm telling it to interpret it as markdown. I've also got some project metadata here. I don't need to go into this too much. You can read the documentation. I have a link to it at the end of the presentation. But I just have name, version, description, all that good stuff. You may also want to include some links for contributors. This is nice especially if you're listing on Python package index because then you have these neat little buttons off to the left where they can view the source, report a bug, donate to the project. That uses this. Of course if you're not listing on Python package index, a lot of that's a moot point but it doesn't hurt to have it there anyway. We also want to specify the search terms which is just a string of comma separated tags, keywords for the project. And we have classifiers. And these are like listing categories for PyPy. I like to have at least one for each of these categories, development status, environment, natural language, operating system, so you know what system it works on, intended audience, topic, license. And then you'll see there's several listings for programming language. I actually removed a few here. I have 3.7 to 3.6 in here as well because it runs on those and 3.9. But this just allows the user to find things that work on their system or their particular version of Python or whatever. So these are helpful if you're listing on the Python package index. And you can find the complete list of these in the PyPy documentation. Now let's get down to the nitty-gritty of this. These two lines are why we love the source directory. So you can have multiple packages inside of one distribution package or as we would call it, you have a source distribution and then you have a binary distribution or a wheel. So within that package, you can have multiple other packages. 
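Pulled together, the pieces described so far look roughly like the following. This is a reconstruction from the talk rather than a copy of the real Timecard file, so the version number, description, keywords and classifier choices are illustrative:

```
#!/usr/bin/env python3
from pathlib import Path
from setuptools import setup, find_packages

# Reuse README.md from the repository root as the PyPI project description.
README = (Path(__file__).parent / "README.md").read_text(encoding="utf-8")

setup(
    name="Timecard-App",
    version="2.0.0",                                    # illustrative
    description="A simple time tracker.",               # illustrative
    long_description=README,
    long_description_content_type="text/markdown",
    url="https://github.com/codemouse92/timecard",
    project_urls={
        "Source": "https://github.com/codemouse92/timecard",
        "Bug Reports": "https://github.com/codemouse92/timecard/issues",
    },
    keywords="time, tracker, qt",
    classifiers=[                                       # roughly one per category
        "Development Status :: 5 - Production/Stable",
        "Environment :: X11 Applications :: Qt",
        "Natural Language :: English",
        "Operating System :: OS Independent",
        "Intended Audience :: End Users/Desktop",
        "Programming Language :: Python :: 3 :: Only",
    ],
    # The two lines that make the src/ layout pay off:
    package_dir={"": "src"},
    packages=find_packages(where="src"),
)
```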
In this case, I only have the one time card, but that's a good point. These two lines work the same every time as long as we're using the source folder. So I tell it to look for packages in the source folder on that first line here. And then the second line, I say to find the packages in source. You have to have both of these lines. If you did not have the source directory, you'd have to do this the manual way and that gets rather annoying. You also want to be able to include your package data. And this is non-code data. So these are not your code files. This is your icons and your audio files, et cetera. And we're going to need manifest.in for this. So I'm going to come back around to this. This is the part most people can guess in the setup.py is the installation dependencies. So I can list what versions of Python this will run on. It'll run on anything 3.6 or greater. But I don't want to show up to the conclusion this is going to run on Python 4 because, well, my dependencies, I don't know if those are going to run on it and we don't know anything about Python 4. It doesn't exist. So I'm not going to make that call right now. I'll just say it. It needs Python 3 for a foreseeable future. And then it requires the PySide 2 library, version 5.11.0 or greater. I could continue listing inside because you see that's a PySide 2. That's in a list of strings. So I could list other dependencies in there as well. And then there's this extras require. I'm going to come back around to this. But these are optional dependencies. So I'm going to return to this a little later on. Now the big question is how do we start the program? And this is what entry points is for. This is one of the nice things about setup tools. So entry points, we specify a dictionary which has as its key GUI scripts or pencil scripts you can have either or both. And then as the value, you have a list of the functions that are being called for the different entry points. So the left hand side of the equal sign here in the string is the name of the entry point. This is the executable as it were when the package is installed. And so I'm using TimeCardApp. On the other side is the path to the function. So we have the TimeCard module or TimeCard package, excuse me, dot dunder main module colon main function. That is called when we run TimeCardApp. We do the same thing with console scripts for GUI or CLI, excuse me. So my text proof program, it's another program I packaged. Text proof is the name of the program. That's the executable. And then in the text proof package, I have dunder main module and I call the main function in that. So moving on from setup.py, we have setup.config. Why do we need this? This is really mostly a legacy file anymore. It only has one use. And actually, if you see any tutorials that have you do anything more than this one thing with the setup.config, my guess is it's probably out of date. And hopefully we aren't even going to need this for much longer. The whole idea is to deprecate this file and move everything into setup.py and into pyproject.toml. So hopefully we won't need this for much longer. Right now, the only thing we use it for is to include the license. So I have a file called all caps license in my repository. And this is my license file. That's the only thing we use it for. Manifest.in. This is the file that used to scare me for years. I'm not quite sure why I may be some bad memories using auto tools and C++. 
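And the dependency and entry-point half, again as a sketch: the PySide2 pin and the package.module:function pattern come from the talk, while the launcher spelling and the commented textproof line are illustrative:

```
from setuptools import setup, find_packages

setup(
    name="Timecard-App",
    version="2.0.0",
    package_dir={"": "src"},
    packages=find_packages(where="src"),
    python_requires=">=3.6",                 # deliberately no claims about Python 4
    install_requires=[
        "PySide2>=5.11.0",                   # runtime dependencies
    ],
    extras_require={
        "test": ["pytest"],                  # optional extras, installed on request
    },
    entry_points={
        "gui_scripts": [
            # installed command = package.module:function
            "timecard-app = timecard.__main__:main",
        ],
        # A CLI tool would use "console_scripts" instead, e.g.
        # "console_scripts": ["textproof = textproof.__main__:main"],
    },
)
```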
But the purpose of Manifest.in is to specify all of the non-code files that you want to include in the package. If you don't include them in here, it's not going to find them. So we want to include the repository documentation files. We have the readme and the license. We're using them in setup tools or in setup.config. So we have to include them. We also need to include all the contents of an in-source directory and time card in this case. Some icons and whatnot. So I can use a glob pattern here. Include in-source time card resources. Include everything. I could also use recursive include to include everything within a directory, including other directories. And I have a directory called distribution resources in my time card repository. And that just contains some stuff for distributing things like my desktop files for Linux. So I will recursively include those as well. So we have all that in place now. How do we install this? I did say it was easy. So here's how we do it. That's it. So in a virtual environment, I'm using the virtual environment out of place. I like doing that. You could activate the Vemv environment. When you have a Vemv environment made, but I'm just using it out of place. So within my virtual environment Vemv, I go in the bin file and I execute pip in there. It knows it's in a virtual environment, works with its surroundings. That's fine. We install a package. Simple enough. The dash E means editable. In other words, we're using the source code directly. Instead of copying the contents of the source file over into the virtual environments bin folder, we are instead just appending the Python path and executing the files in place in the source folder. The benefit to this is if I modify those files in the source folder, I don't have to reinstall the packages to see the changes. So that's helpful. And then that dot just means install according to the setup.py file in this directory, which is of course the root of a repository. And then to run it, I just invoke the time card app. That's the entry point I specified earlier in the virtual environment bin folder. That's all. That executes the program. But that leaves one thing left to do and that is testing. There's two ways we can do this. In-place tests or out-of-place tests. And if you're anything like me, you have spent more hours than you care to admit researching which one to do. So I'm going to answer the question for you, out-of-place tests. Why? So I believe your tests really belong in a separate directory from source. Maybe there's some use cases for in-place tests. So I'm not going to say this is a hard and fast rule. But in general, I think they belong in a separate directory. You may be wondering why not just put the test right in the source code. That's what I did for many years. This kind of goes back to the same reason why you have source. So by having an out-of-place test, you only run the test on your installed package. Forces you to install. The tests are going to wind up importing the package in the exact same way that the end user will. So the test cannot make any assumptions about current working directory whereas an in-place test could. The test has to import the package. So if there's any problems importing the package that the end user is going to experience, your tests are going to have the same problems. It also prevents you from accidentally shipping tests in the package. Now there are ways to include the test in the package using setup tools, using your setup.py file. 
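The matching MANIFEST.in and the install-and-run cycle look roughly like this; the resource directory names are guesses at the Timecard layout rather than copies of it:

```
# MANIFEST.in
include README.md
include LICENSE
include src/timecard/resources/*
recursive-include distribution_resources *
```

And from a throwaway virtual environment, where the [test] extra is the optional group defined in extras_require:

```
$ python3 -m venv venv
$ venv/bin/pip install -e .             # editable install driven by ./setup.py
$ venv/bin/timecard-app                 # run via the entry point
$ venv/bin/pip install -e '.[test]'     # add the optional test dependencies
$ venv/bin/pytest tests/                # tests import the installed package
```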
More advanced, I'm not going to go into that. But you could do that. What you don't want to do is accidentally ship all of your tests inside of some huge program. You can imagine how bad that would be if you have a massive framework and you also have had an equally massive testing library and you included that in every single build or every single package. That would be a bit of a pain. So it prevents you from accidentally shipping your tests. You can do it on purpose but explicit is better than implicit. Now if we want to run these, we need to install our optional testing dependencies. And we do that on the virtual environments pip. Installing, again, editable mode since we're going to reinstall our program. What we do, this is in single quotes, last part is in single quotes because I'm in bash. So I have to do that to keep it from thinking it's some sort of glob pattern. But I have the dot as before. But then in these square brackets I say test. Where is that coming from? Well, that's coming from extras require. So that key on the left test, I could name that whatever I want but that is an optional set of extra packages. And so in addition to just pie test, I could include things like talks or coverage.py or any of my other testing libraries. If I had written some really crazy profiling stuff, they're required numpy. I could include that in here too. So it allows you to specify all the tests that all the packages you need for your tests without having to specify those in the main dependencies. Which is really helpful. It makes it a lot faster to be able to get someone on board into development. They just have one command to run to get all the test packages and they're good to go. Then you just invoke your test framework inside the virtual environment. Because the tests are relying on the package that you installed in the virtual environment, it just uses that. It runs the test directly on that package. This is actually what ultimately won me over to using source. It's so much simpler and I knew that my tests were reasonably reliable. That it was working on the package the same way that the end user was going to wind up using it. And it does make life so much easier for testing and deploying. So if you want to see any of that again, because there's a lot of code there, again, not expecting you to memorize that. You can find both of the projects I talked about on my GitHub. So github.com for slash code mouse 92 slash time card. And that's actually a very useful program if you try to keep track of time while working. So you can download that. And then text proof is the other one. And I really only built that to demonstrate some principles of testing for my book. But it's useful in its own right if you want to be able to do a grammar check on the command line. You can take a look at that. You can also find documentation for this at packaging.python.org. The Python packaging authority has really improved this whole experience documenting how things work. So I really appreciate them in that regard. So packaging.python.org and they explain the whys and the wherefores of all this stuff. So I hope that's helped you escape the cargo cult. This is actually a lot deeper topic. I just scratched the surface, but hopefully you can see now that it's not quite as scary or difficult to understand as one might expect. So if you want to find more of me online, you can find me pretty much anywhere as code mouse 92. I'm pretty active on Twitter where I post development tips and puns. 
And you can find all the rest of my social media links as well as information about my books at code mouse 92.com. I also have links to my other talks. So thank you very much. Okay. Thanks Jason for your very clear talk on this complex question that is packaging. Now we have a few questions and fortunately we have time to answer them, I hope. So first question is I've seen SRC as well as package name slash for the source directory as for tests. I've seen both in the project root deer as well as under the source deer. Which one is the best practice? Well I have to put a huge thing right at the front here in order to properly answer this question and that is somebody else said well the presenter is very opinionated. Well actually every Python programmer who's ever packaged a project is very opinionated. So I'm no more than anybody else in chat. My stuff is based on the Python packaging authority. And so they're not really, they like to say they're not really an authority but they are the official work group that kind of keeps this whole thing working. So the idea is if you need a different approach, use a different approach. If you know why you need to do it differently, by all means, break with convention, do what you think you need to do. What I presented is literally for everybody who doesn't know what they're doing, which is most of us if we're totally honest, we guess it, we guess a lot of us. So it's not so much opinionated as if you don't know what to do then do this because this is what is recommended between the Python packaging authority of the documentation and the PEPs, which is where the whole thing about PyProject comes from. So with that said, the source versus package name, I actually used to use package name. I didn't bother with the source directory for many years and I thought it was a waste of time. But I found the source really is easy to work with unless you have a reason to do it the other way because it's, if you have more than one package or change the package name or whatever, it makes things a lot easier, a lot more resistant to breaking if you change directory name. So source just simplifies things in general. It really isn't much extra work and you get all the benefits. As to the thing with the tests, I don't think it matters. Like I mentioned in the talk, I prefer out of place tests because I find them a little bit easier to set up, a little bit more obvious, but again, if you have a particular reason to do it the other way, do it the other way, if you don't know which one to do, do out of place, it's just going to be a lot more straightforward. Thank you. Another question we get a lot is what about the PyProject 2ML that came up recently? Yeah, that's been, see the thing is that that's actually standard in a PEP now. So that is where setup.cfg never was. So PyProject is actually something that has a PEP behind it to normalize it. I do understand too what someone mentioned, well, you know, Tom wasn't necessarily installed well neither or any of your other project dependencies, I might point out. The thing is that it is becoming more standard. So the idea is to have one file instead of 20 files or three files for your configuration. So again, if you don't know what to do, lean towards PyProject.toml mainly because it does have a standard behind it. And so there's a particular way of doing things and it's gaining that adoption. 
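For reference, the smallest useful pyproject.toml for a setuptools project is just the build-system table below; a setup.py like the one sketched earlier keeps working alongside it, and metadata can be migrated over gradually. This is a generic sketch, not taken from the speaker's repositories:

```
# pyproject.toml
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"
```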
There's always, this is Python, we always have people that want to do things the old-fashioned way or the previous way or whatever or they won't. Well, I prefer how long it took us to go from Python 2 to Python 3. So there's always going to be some resistance and there's always going to be some reason for the objections. But again, if you don't know what you're doing, it's just like with Python 2 to Python 3. If you don't know which one to use, use Python 3. So if you don't know which one to use, use PyProject.toml because it is definitely the direction things are moving in regardless of how people feel about it right now. All right, thanks. I think a bit more specific question, recommendation on packaging for containerization. If I'm going to go ahead and have to package for containerization before. But you know, I will say that from what I did research, so this is now secondhand information, but based on my research, it looks like, you know, PyPy is really useful if you needed to play into a container. You know, just being able to do the PyP install is nice. So you know, that works. But I can already hear the shouting voices of a bunch of people watching this video now and in the future going. But this is better if you have a better tool, if you have something else in mind already that you know is going to work with containerization.
Structuring a Python project is often non-trivial. We pick up pieces of different patterns and techniques, blindly applying them without understanding their implications, in an attempt to ship software. Testing and packaging become significant pain points for many developers, and this need not be so. In this talk, Jason C. McDonald breaks down the best way to structure a Python project for maximum portability and maintainability...and more important, explain WHY these patterns exist.
10.5446/14105 (DOI)
Hi, I'm Daniel Black. I am presenting here from Sunnywell in the daytime camera. It's the middle of summer here and it's nice time to be here at FOSTIM remotely. So I'm here to talk about MariaDB platform as a service using SystemD. This has got a place to my heart because this is actually one of the first larger bits of work that I actually started work on at MariaDB while I was doing database consulting and not really getting paid to do development. But it was a fun way to start and ended up where I am today because of it. So if we look at how this relates to the cloud stream that we're in, we might consider that like platform as a service is like this historic thing. In fact, the open source platform as a service, it's listed here as like nine years old, 10 years old on that. So what we're here to show is, you know, some of the concepts that apply to cloud native are also applicable in the open source platform as a service model. And just another way to do it. If we look at what cloud native actually aims to achieve, and this is from the CNCF definition, is loosely coupled systems on resilience. It's about having services that are manageable and about being able to observe, you know, individual services and also the robust automation of this. And we'll go back to this later and see how well this system D implementation maps up to those. Many in the past and it's still available as an offering. In fact, Cpanel still does it for its users is provide me to be on shared hosting. And it's a trade off. It's really good for simple input installations. It's got database and privileges and user separation to ensure that at a base level, you the privacy of each users is actually protected. However, it's got some weaknesses that it's a single MariaDB process running. It's setting sorry, the global or session. There's no per database or per user setting for various things unless you want to change the settings on each session. MariaDB has a number of shared items, i.e. the table case, in a DB threads, in a big DB history, all the the f sinks that sort of happen to ensure that your ability of those are kind of shared items in the back ends. And if you're ever on a shared hosting with an evil person, they've got the potential to disrupt their other users. So it's a cheap simple option and it for the most part serves as purpose. If we look at how systems, these story in MariaDB started off, it started back in like 2015, when I read like the typicals notify changes into MariaDB to ensure it starts up in a cooperative manner with system D. For a couple years afterwards, there was compatibility packaging problems. Every one in one and CISD at the same time. And the interoperability between those wasn't great. Finally, that got resolved after a lot of displeasure. But I think that was a general sign of the times anyway. In 2017, at the end of that, I finally got committed into MariaDB in 10.4 series, the flexible MariaDB multi service that I'll show you one example of today. At the same time that was going on and realized that there were some changes actually needed in system D itself. And part of the problem with MariaDB is that as a database service, when it's starting up, it can take a while if it kind of got killed off hard to roll back all the transactions that weren't actually committed. And that delay is rather unpredictable. So the 90 second system D default that was at the time could quickly get exceeded. And it's sort of not up to the package to try to guess as to how long that is. 
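From the daemon's side, that cooperative startup looks roughly like the sketch below. It is not MariaDB's actual code, but sd_notify() and the EXTEND_TIMEOUT_USEC message are the real libsystemd interface involved, and the unit has to be Type=notify for systemd to listen:

```
#include <systemd/sd-daemon.h>

/* Called periodically while crash recovery is still rolling back
 * transactions: ask systemd for another 30 seconds rather than letting
 * the start timeout kill the service mid-recovery. */
static void still_starting(void)
{
    sd_notify(0, "EXTEND_TIMEOUT_USEC=30000000\n"
                 "STATUS=Rollback of incomplete transactions in progress");
}

/* Called once startup has actually completed. */
static void ready_now(void)
{
    sd_notify(0, "READY=1\nSTATUS=Taking SQL requests now");
}
```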
In fact, even the sysadmins of such services would probably be surprised if the service didn't actually start up in that time. But what they'd be more surprised by is that it was that far into processing and then got killed off, which is not what we want to happen. And so into systemd I added a bit that enables Type=notify services to communicate with systemd to say, you know, hold off, I'm still going, I'm still going, I'm getting closer towards an outcome on that. And so that cooperative approach has been there for quite a while. If that somehow fails, the fallback in MariaDB and other database services is to have a SendSIGKILL=no setting. And that's because you generally don't want your databases to hard fail, because that just means they're going to spend longer on recovering and rolling back stuff when they start up. What this meant was that you can actually have systemd forget about a MariaDB service because it's past its timeout, and there was an option that says don't kill it, at which point another service can actually start up. And it would be bad if that occurred at the same time as the other one is trying to clean up after itself before getting out. So we prevent that: there's a mechanism within MariaDB that prevents that kind of race condition. But let's not present sysadmins with two processes where they don't know which one to kill off. And this kind of concurrency and lock prevention, to ensure there aren't multiple services on the same data directory, applies on cloud services as well. So eventually we'll need to see how to integrate those mechanisms with whatever the cloud mechanisms are for that. To run multiple instances of MariaDB at the same time, we obviously need a separate data directory, and we need a unique TCP port and/or a unique UNIX socket at the same time. If we look at the systemd multi-instance implementation that got merged into 10.4, there is an initialization phase in the ExecStartPre and an execution phase, which is when it actually starts the service. Both of these have the environment variable MYSQLD_MULTI_INSTANCE, and both these programs take exactly the same arguments, so it works quite well that way. So let's get on with this demo. First of all, we're starting with a Fedora 32 and we've got MariaDB installed from the MariaDB releases rather than the distro packages. Now, login.defs was actually manually altered: I've put a minimum UID of 10,000. And we've done that because we're going to use that as the port number. Also, noted at the bottom, I've got CREATE_HOME set to yes, just for simplicity. So in addition to creating users, what we're going to be doing is ensuring that each user has their own configuration file for MariaDB. And we've created a little template here, and we've made the port and the user part of the template. And this will get populated by the script in the systemd overlay here. So this is the extension of the default MariaDB service definition: the default service template gets extended by a drop-in, and here we're going to key each instance by the username. So we've set the user to %i, which is the instance name of the service, as you'll see soon. We've set ProtectHome to false to get past an existing security mechanism. As before, we've got MYSQLD_MULTI_INSTANCE as the environment variable that controls the installation and execution on both of those. And we've got an ExecStart and an ExecStartPre list.
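A reconstruction of what that template and overlay can look like. The file paths, the placeholder syntax, and the init helper script are assumptions for the sketch, not a copy of the slides:

```
# /etc/mariadb-peruser.cnf.in   per-user config template; @USER@ and @PORT@
# are filled in at first start (the UID, at or above 10000, doubles as the port)
[mysqld]
user    = @USER@
port    = @PORT@
socket  = /home/@USER@/mariadb.sock
datadir = /home/@USER@/data
```

and the drop-in over the packaged template unit:

```
# /etc/systemd/system/mariadb@.service.d/peruser.conf
[Service]
User=%i
ProtectHome=false
Environment=MYSQLD_MULTI_INSTANCE=--defaults-file=/home/%i/.my.cnf

# Blank assignment first to reset the packaged ExecStartPre list, then a
# (hypothetical) helper that renders ~/.my.cnf from the template, creates
# the data directory, and runs mariadb-install-db with unix_socket
# authentication the first time this instance starts.
ExecStartPre=
ExecStartPre=/usr/local/bin/mariadb-peruser-init %i

Restart=always
```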
And this blank line isn't actually a mistake: it's to reset the list in the service back to empty, rather than extending the existing one in the definition. Our first line obviously checks if the file exists, and if not, it creates the file from the template. It creates a data directory, and it does mariadb-install-db with the default options on the data directory. It also creates the database with an authentication type of unix_socket. And what that means is that only the UNIX user of that name is actually able to access this instance. So we're protected against users accessing each other's databases from the start, even without a password. And of course the root user becomes the username of the service. We've got Restart=always at the bottom, basically so the user can quit out and restart their database service if they shut it down, and I'll demonstrate this soon. So now that we've got that configuration file, we tell systemd to daemon-reload and then we can create our users. We add, you know, Tom, Mary, Jan and Fran. And then we start the services for each of those users. Here's where the @%i comes in for Tom, Mary and Fran; it's just mapped across that way. We have a look, and we can see that we've got four MariaDB instances running. And if we look at Tom's home directory, we see the data directory populated and also a my.cnf file there. If we look at that for Fran, we can see that it's up and running, that the service is started, and the last bits of the logs for the service. If we look at journalctl -u mariadb@fran, we can see the entire installation logs from earlier. If we change to the user Fran and connect to the database as Fran, we see that the UID is 10003, so that's the port number that we were connecting on. If we select CURRENT_USER here, we see we connected as Fran by default. And if we SHOW GRANTS, we see that we've got all privileges on the database. The default super user is Fran. It's got mysql_native_password authentication available, but with an invalid password set, and the alternative is the unix_socket authentication, which we've actually connected on. So the user could actually set a real password. As we said before, because they've got all privileges, they can shut down the database. And if we look at the uptime, it's disconnected, and we look again and it's restarted after 5 seconds, which is the default for that, just to avoid storms of restarts if things go wrong. So that covered the normal systemd service framework, and what I want to actually talk about now is socket activation, which has now got this fancy new word called serverless associated with it. And we achieve this with a one-line file, i.e. this one, that says there's an abstract UNIX socket to listen on, and that's all that's required. And so now, instead of starting mariadb@tom.service, we start mariadb@username.socket. And what this socket does is it's automatically there and listening, as you'd expect it to do, but at this stage the service isn't actually started. If we look at systemd with lsof, we see that systemd is actually listening on these sockets, and when it gets a connection on one of those, it will actually launch the service. And it launches the service not by accepting the connection itself, but by passing the file descriptor it was listening on to the service process, and when MariaDB is up and ready, it will actually accept the connection.
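The one-line file is the ListenStream line of a matching socket template; the socket name here is an assumption, and the leading @ is what makes it an abstract UNIX socket:

```
# /etc/systemd/system/mariadb@.socket   shares the template name with
# mariadb@.service, so mariadb@tom.socket activates mariadb@tom.service
[Socket]
ListenStream=@mariadb-%i
```

After systemctl start mariadb@tom.socket, systemd holds the listener, and the first client connection is what actually launches the service.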
So we see this when we connect on say one of these sockets we'll see the read your B at tom. Socket service sorry socket is now status active and running as opposed to active and listening which it was before. And if we look the service status below we see that it started up the service and the mapping between those is purely based on the overlapping name between the multi instance socket and the multi instance service. So that's why we didn't actually need to change the service at all. So this is the work in progress as I said the socket is actually passed into the service and that means we've got to do a few funky things to say well we don't open and bind to a socket we just take the socket folder script as is and continue on. And so I've picked up my work that I began five years ago and continue on and we'll get that merged into 10.6. So if we look at system D against the multi instance versions that were discussed here against the cloud requirements we see that we've likely got a loosely coupled service. They're independent services in that they're running on their own user. There's the same binary and the same libraries for all of the users and the same OS. So in that sense it's not as loosely coupled as the cloud container will but it's definitely simpler in that way. We've got resilience in that each process could kill or be died off in a separate way and we've got the ability to apply controls over each service. For instance you'd probably run a memory limit for each of the services to ensure the users don't exceed those and you can do disc quotas and IOT quotas and I forgot to mention all the system D services are running in their own C group so we've got a huge range of controls available now for setting those and that can just be part of your system D template exposes most of those anyway. On resilience we've still got the hardware in the OS as a single point of failure but we've got independent services. On manageability we can sort of create users and start a service exceptionally easy on those so it's sort of ready to be automated that way. If we look at observability we've got each user actually generating their own logs in that way and we've got distinct C groups for monitoring and resource control and we've got the ability to automate it. So have you got any questions?
Using baremetal and user level segregation, we can use systemd multi-instance to provide MariaDB as on demand PaaS, where every user gets their own configuration. This talk will show you what this looks like from a user and system maintainer perspective. With containers as all the rage, and the perceived default way of doing things, lets take a look at another approach of PaaS. Systemd as a service manager provides significant functionality on delivering multiple similar services in a managed way, so why not MariaDB PaaS? So with a few configuration changes from a default MariaDB install, let's show what a per user database instances looks like. Adding socket activation to mix and see "serverless" capability before it came fad with Kubernetes. On top of that, a brief look at abstract sockets that have been in MariaDB for while and what they could look like in a PaaS environment.
10.5446/53627 (DOI)
Hi everyone, my name is Jormel Fernández and I'm going to talk about conflict everywhere. Autocompose the configuration and secrets for a microservice environment, taking account, multiple variables, without dying anything. As a result, I want to talk about me. I am Tintan, I am an algorithm and data structure enthusiast and I am working as backend engineer since 2017. I am assisting to the first event since 2018. My contributions are mainly related with the database field. And you can see things and I think that was important in my blog and you can see me in Vithag like on the list. The first of talk, I want to talk about what I think and if it's a dynamic configuration and some things about it. I see some posts in the Twitter blog and have a very nice definition, like a dynamic configuration, stability to change the behavior and functionality of a running system without requiring application restarts. The old dynamic configuration systems, name service developers, and administration to view and update configuration issues. And deliver configuration days to the application efficiently and rarely. And I have with the pandemic sometimes, so I want to start this experiment that I define as a fixing way to compose and obtain configuration issues. And since Ferturist has atomic, one if you change some variables, you know if it is updated and if you date it in one size, you have another. For example, you update your configuration that are the changes are delivered and the same thing. All or none. The second one is level visit. I think that it is a nice feature with the change, for example, with Grafana to Prometheus and it's have some rates of freedom and I think that could be a nice feature. Another thing is a low reference and overrides to avoid duplicate the code, in this case the configuration. Secure to avoid manage secrets with it. So you can have the secrets and the config in the same system. Another thing is traceable and not only has developed but has administration. We want to know the config was updated is some service or data service is watching or is obtaining a configuration time it's located or something like that. And the last one is efficiently. We want to do this smoothly fast and with a minimum number of results. For data I want to do something simple about how this image config work. The philosophy if you want to ask for the configuration and database. We allow the application sessions and the product environment. The data center and the with the secrets of production. And you define some configuration for the same for the database configuration for all the service. Do you want to have a database prefix and the database name and the user is the same. Some prefix and some use. Also you want for the user for the application sessions have the database name sessions and the user sessions. In the environment production the prefix will be production. And for the secrets of production of the application of sessions the environment the production and the data set of Belgium. You have to have some random password. And the last one for the environment of production of Belgium you have some hot database calls. Who's writer and reader on DCP. So the expected config will be to the database to connect will be the production sessions with the user production sessions. The password defined and the writers. This will be mainly the return and configuration and the behavior of the programs. And at the scale. After it I want to talk about all these words and some factors of it. 
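On disk, the worked example above could be laid out something like this. The key=value directory convention is from the talk, but the exact file names, including the secret file, are guesses:

```
config/
|-- database.yaml                       shared defaults: user and name templates
|-- application=sessions/
|   `-- database.yaml                   database name and user "sessions"
|-- application=calendar/
|   `-- database.yaml
`-- environment=production/
    |-- database.yaml                   prefix "production"
    `-- datacenter=belgium/
        |-- database.yaml               writer and reader hosts
        `-- application=sessions/
            `-- _bin.database.password  the random password, stored as a secret
```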
The first one is to use a directory that can be used with another operation or something like that. With the folders and the directory names and a special meaning. Finally, this next rules. The first world and a file and directory started with a dot is ignored. The second one is the path specified the document levels and we have two formats. The first one is the compact format that is the thing in which k equal value. Where the key is the key of the label and value is his will. In other case, it follow the skin key to follow their value for the first level specifically the key in the second one, the spally. Another thing is a file starting with a underscore is a special file. You have to kind of special files at least at the moment. The first one is the text special file values. The it's used the text of the file as the document. Another one is a beam for bin other files is a low new define a secret. Copying only the file to use has a secret the following with this prefix. At the end any section any file ending with the general section is recognized as a confirmation file. And the document name is the string from the given into the last dot. For example, the file that a base.yaml is the document database. And another files are ignored. They are seen as the levels. The levels are different some characteristics of the configures to be composed. For example, is the type of environment, the name of the service, the type of the secrets and so on the other rates that you could be necessary for you guys. To calculate the overrides and the merge operation is follow the next algorithm. The first one is obtained on the subset of the files that have a subset of the requested levels. This configuration are often linked with the number of levels from less to more explicit issues. In the case of a series model that a new document with the same number of levels it ordered it with a list of weights. These weights are defined in a content stored in the document MHB.yaml. This document is stored on the root of the configuration. There are still some details. In this case the yaml tags. Yaml is specified as some custom tags to facilitate the development. In this case you have the format tag, the load-do-stream interpolation with a reference of the config. Where the contents of the configs are obtained, for now value, put inside the brackets. It's a nice last to separate the key builds. To Skype this bracket you can use the doubler bracket, like the Python format. And so an example will be this. In this case you want to obtain the domain happyweb value and put it in this string. This will obtain the value happyweb from the document domains. The next one is ref, so insert the configuration of another files in the place defined. The requirement is that it don't exist as circular dependency, because the configuration will fail. Another example will be the ref database schema, so when ref lists the valid schema of the document database. In the case that you want to put a value in the same of the same document, because it is a circular dependency, you can use the ref. That is only worse with the scalars and new values. In this case the ref message, with reference with the value measure of the self document. In the case of you want to remove some previous definite element, you can use the delete tag. It will remove the value with the less precedents. And by default have the values are marked. You want to override some value with another value you need to put the override tag. 
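Putting the tags from this part side by side, in the spirit of the examples given. The tag names follow the talk, but the argument forms, especially for the reference tags, are my guesses, so treat this as illustrative rather than documented syntax:

```
# domains.yaml
happyweb: happyweb.example.org                    # illustrative value

# web.yaml
banner: !format "Welcome to {domains/happyweb}"   # interpolation; "/" separates key fields
schema: !ref [database, schema]                   # insert a value from another document
message: Hello
shout: !sref message                              # reference a value in this same document
old_key: !delete                                  # remove the value with less precedence
log_level: !override debug                        # replace instead of merging the value
```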
By default the dates are marked, keeping the key in the case of duplicates of the value in the last value. The one with the more priority and in the case of this list, the lists are appended. And the last one has the string binary in front and volume values. That is the finite, the general standard. To keep secrets, it has some asiel that allow tokens to auto-claim necessary. This is in another special directory for the configuration folder. And in this route we have the tokens.jml file. This journal with the next structure will be looked at the start of the rackets. In the case if the token is defined, we use and compound the asiel configuration until it has another configuration. So in the route you can use the label kinder route. With the label you can use the king developer and the environment of depth. And in the case of some application number calendar in the environment of production, you can have this label. In this case the configuration label will be composed in the same way than another label. Once defined this, we take a look in the policies. That is a way to restrict access to the results. From the capability, the route path and the lovers. In this case it will be disformed. In this case this policy will allow the head watch and trace capabilities. In all the route paths that start with mode that I make config. And the last two path is to use. And with the capability of head watch and trace. And a load with the levels. By default is the last line. Also for the key test we want only to have the capability of head and watch. And also in the case of the key test and the value need you have this capability. In this case the plus character define any single path segment. And the chapter will card denote other naming man of path semaphones. But it only can be used at the another. This will be obtained with a RPC API. And this API allow a specify, obtain a specify configuration of the version of the document of the last one. The cluster to head and the last configuration version of the document. And obtain all the new chains of that document. Notify the diamonds that some file has been updated. And trace the action of the service according to some factors. The case you want to know if some watcher is using some file. Or what value is using some app is this. Also to use it you can use the next clients. In the repo we have a Python script to notify the chains and configuration fans to notify. You can use have a pic cool or if you like you can write and say all your time. Also in the repo is a proof of concept Python wrapper but with my lines change it's broken. So you can take a look on it but I don't set that it works. And it is sold. So I want to have some demo with it. In this case we start the diamond and we start the client to update the chains. I created a directory with the service sample. In this case we have a database and then it's complete the path. And the last secrets with the data set on vacuum. So sessions app and also with a calendar app and also the last values. You have these things. The database you want to format the preface with the database name. It used to be the same way. In the case of the calendar for the secrets we can use some password referring to the document text. This is the password of case. And it is a binary password for the sessions app introduction. The others that we want to have in this case have less priority and the secrets with the most one. The address host for the database in the vacuum data center and production. 
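A sketch of what such a policy could look like. The field names and the overall file shape are my reconstruction, not the project's actual schema; only the capability names and the + and # wildcard semantics come from the talk:

```
# policy for a hypothetical "calendar developers" token
capabilities:
  - path: "myapp/config/#"            # "#" matches any remaining segments, end only
    allow: [head, watch, trace]
  - path: "keytest/+/values"          # "+" matches exactly one path segment
    allow: [head, watch]
labels:
  kind: developer
  environment: dev
```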
And some parameters for the calendar application and the sessions one, and also the prefix for the rest. So in this case we can ask for the sessions app and we obtain the expected result: the database is the production sessions one, we have the password, the host for the reader and the writer, and also the user. In this case you can change it to use the calendar app, and we obtain the same schema but against the calendar database, with its password and its user. This was for the get request, but you can use another one, like trace, to know what happens. In this case we can use a trace: for example, if we take a look at the calendar app, we see that someone is asking for the database document of the namespace; it asks for the document version one, with these labels, and the database document is returned. Another thing that we can use is watching for a document change. In this case we will watch the calendar document for this environment: we can see that we added a watcher with these labels and it obtains some config with the version one and these values. Now we will update a value, for example changing the dataset, and notify the update: a new document with the version two is activated, with the new values, the only change being the one we just made. In the case of the watcher, we can see that it now has the new version two value for the same document. And if we close the watcher, we are notified that the remote watcher was removed, and we could update the database for production in the same way. Then there are some things that I want to improve. The first one is better configuration for the daemon, like the number of threads, the ports, etc. Then the clients: for example, I have only written one and it is a proof of concept, and I think it would be better to do something more solid. Another thing that I have in mind is better update semantics, for example to acknowledge the commits, not only notify them, and to only declare the update done once all the commits are applied; then you can ensure that if you make a commit you get all of it or none of it in the configuration. Another thing is to investigate its use either as a standalone tool or only as a dynamic configuration tool, and also to take a look at integrations. I also want to improve the trace information for the requests, with more information about the service and when it was updated. Also, asking for multiple documents in one request at the same time, and replicating in the get command the configuration return format of the watch one; the first part is about the serialization, both in the daemon and the clients, and more or less it is solved. So, if you have any questions. Okay. Okay. So, the first question we had was, let's say, that we already have many config tools; can you tell a bit about what the difference is between MHConfig and config tools in general? Well, the main motivation I had in mind writing this tool is that the existing tools focus on programming the YAML files, so to speak, with some template engine or something like that. I think that is fine and it has a lot of power, but I don't think it is a good thing, because in the end you only want to have some data and change it; you don't want to program in YAML files or something like that. So that is mainly one part of the approach. And also because the merge part is somewhat obscure in other tools: you don't know exactly how the merge was done, and if you want to check why this value is in this place it is a bit difficult. And the last one is the merge operation itself.
In the other tools, you need to specify what dependencies you have: if you want to open another file to inject some variables, you need to define it in some order, and it seems that a label-based approach is simpler and easier to manage. Okay, thank you. Okay. Let's see if there are some more questions coming. Any question? I don't know if you want to know something about how this is used in some place or... Okay, from my side, when did you come up with the idea? Like, was there a specific tool or project where it was the most upsetting and you thought, okay, I need to do something about this? Well, it started as a proof of concept written in C++ to see how to have a simpler approach; I don't have a specific tool in mind. It's interesting that there were some studies about how different tools parse the configuration and the bugs they trigger, and I can imagine that your tool helps, or library, I don't know what is even the right term for it. Should I say library? Should I say tool? Should I say... Well, for the moment, maybe if I have some time I want to split this into a library to do the RPC requests or something like that. For the moment, this is a tool, a service, I don't know how to say it properly. Okay, there's one question from Marco: do you know the Jerakia project? Okay. That project, no, I don't think I know it. Based on the link, it says it is a data lookup system, so probably similar; maybe, I don't know if it also has some daemon or something. It's interesting, I will take a look at it. But no, for the moment, I don't know it. Okay, last questions, if there is anyone? Okay.
How to compose the configuration and secrets of microservices taking into account various variables, without dying in the attempt. In a microservice-based architecture it's common to have to manage configurations and secrets based on different variables, such as the microservice, flavors of the microservice, the execution environment and data center, etc. Since it is an arduous task, both in the time needed to add a new variable and to refactor existing ones, I have created MHConfig. MHConfig is a microservice in charge of composing configurations and secrets according to different labels and notifying the clients who use it when a change has occurred, providing various features to facilitate this work, such as references to recurrent or global values, traceability of the clients using a configuration, access control lists to prevent access to other microservices' configurations/secrets, etc. In this session I'm going to talk about how the tool works and how it can help solve common problems when managing configurations and secrets in a complex environment.
10.5446/53629 (DOI)
Hello everyone, my name is Dany Tirishinka and I work at Redhead as a part of Pulp Team. And over the past year I've been focusing on the migration between Pulp2 and Pulp3. And this is what I'm going to talk about today. I will work under assumption that you're familiar with Pulp2 at least, otherwise the migration question is likely not relevant to you. Alright, let's get started. First I would like to quickly touch on why to migrate at all, just in case you have any doubts. I believe there are many reasons. For one, Pulp3 has a lot of new features. There are more content plugins, I believe it's easier to work with. But also Pulp2 has quite old technology stack and some components either already have reached or approaching their end of life. To give you an example, a big one would be Python 2. Another would be MongoDB and it's a 3.6 version which I believe is the last under the GPL license is approaching its end of life in the next few months. And most importantly, Pulp2 itself will stop being supported next year. So please do migrate. I believe it's in your interest. So now let's look at how to achieve that. And the first option is very straightforward, just manual. A fresh Pulp3 install and you just create all the repositories you need, configure them and synchronize content either from your Pulp2 instance or from some remote repositories. Or maybe you can even upload the content you have. It might be suitable for you if you have a small and relatively simple setup. But with scale, it's not that easy. So that's the reason we wrote the migration plugin for you. All right, let's take a closer look at that. The migration plugin itself is called the package Pulp2.2.3 migration and it's a standard Pulp3 plugin. It's a Django app. You can use it. You can install it using our installer. Everything is standard. You just need a little bit extra configuration to connect to MongoDB. I'll show it later. It currently requires Pulp3.7 on your and the latest Pulp2. As for the content which is supported, you can migrate your ISO Docker and RPM content and also Debian is on its way. It's partially supported already and our community is working on it. It's an active development ready soon. So what do you need to run the migration? Of course, you need Pulp3 itself and all the content plugins you are willing to migrate to. If you want to migrate ISO content, you would need Pulp3 file plugin and so on. It's not important if Pulp3 is on the same machine or on the different one. The key is to have access to Pulp2 database and to Pulp2 file system where the content resides. Now to run the migration. First of all, you can migrate one or more plugins at a time. You need to create a migration plan. And there are some options. If you just willing to migrate everything for a specific plugin, that's the easiest use case and you can create a simple plan. It's very short, very simple. I'll show it to you. And you don't need anything else. On the other hand, if you would like to have more control over what you migrate. Or you prefer to change some combination of repositories and portals, distributors, or you would like to migrate a subset of repositories. For that, you would need to create a complex detailed plan. And so let's take a look at that. I switch to terminal. Alright, first, I'll show you the settings. So you're familiar how the configuration looks. So this is what you would need to add to your Pulp3 settings. And it's basically a copy paste of your server.config in Pulp2 related to connection to Mongo. 
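As a sketch, the Mongo connection block and a "migrate everything" plan might look roughly like this; the setting and field names follow my reading of the pulp-2to3-migration docs rather than the talk itself, so double-check them against your version:

# appended to the Pulp 3 settings file, mirroring the [database] section of Pulp 2's server.conf
cat >> /etc/pulp/settings.py <<'EOF'
PULP2_MONGODB = {
    'name': 'pulp_database',
    'seeds': 'pulp2.example.com:27017',
    'username': 'admin',
    'password': 'changeme',
}
EOF
# a simple migration plan: migrate everything for the iso plugin
echo '{"plugins": [{"type": "iso"}]}' > simple_plan.json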
And that's it. Alright, we talked that we have a simple and complex plan. So the simple plan would look like this. It's indeed really simple. Here I have only one plugin, but there have been two or three. And you specify a name for the plugin. It's a Pulp2 name. And this means migrate everything for my ISO. Alright, now if we take a look at the complex one, here how it looks. You still need to specify the type of the plugin. But now for each repository you want to create in Pulp3, you need to specify all the details. And we'll go through them here. So let's say I would like to create a Pulp3 repository with that name. I would like to migrate importer, this one. And because Pulp3 has repository versions, and there could be many here, I have one, you can specify that this Pulp2 repository migrates into this repository version. And I would like to serve the content of this repository using information from this distributor and from this distributor. So the same content will be served twice. Just an example, if for whatever reason you would like to do that, let's say it will be def and this will be test. Alright, so this kind of configuration you need to specify for every single Pulp3 repository. And here below I have more, and it's just migrating as is one to one. Everything related to this Pulp2 repository, the same in Pulp3. Alright, to create this migration plan, you just need to send a post request to the migration plans endpoint and specify your plan in the JSON format. That's it. Okay, let's run it. So to run it, it's still a post request to this specific migration plan. And we specify in action here, run, because there are more options. Okay, it gives us back task. And if we look at the task, there are some details here, progress report and some more details including the task group. And we'll get to this soon. Alright, so for now that's it for the demo. Let's move back to the slides. Now I would like to tell you a little bit about the details and behind the scenes what happens there to better understand the process. First step is what we call pre-migration. In order not to overload Pulp2 with many different complicated requests and searches, we just take all Pulp2 data we need for migration and put it into a post-gres database. So this is the pre-migration step. After that, the migration itself starts. There is content migration and there is a migration of other resources like repositories, importers, distributors and what not. That's it at the very, very high level. Then what else we do within migrations? For content migration, we try to use hard links when possible. So if your file system supports it, if it's possible to create hard links between Pulp3 storage and Pulp2 storage, then we'll create them not to require extra space for your content. If it's not possible, the content will be copied over. Then repository creation and other resources. This part happens in parallel and it runs in multiple tasks. So potentially you can control how many resources are used for that and how fast it goes by changing the number of Pulp3 workers. Actually, let me show you now our progress report. If you look, you can see different steps here and you can notice the pre-migrating content, different parts of it, then creating repositories and migrating. So you clearly can see those pre-migration and migration steps and mostly when it comes to content. And then we also have task group, which I pointed to last time. 
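A sketch of the detailed plan and the two API calls described above; the host, credentials and exact field names are assumptions, and the migration plugin docs have the authoritative schema:

cat > complex_plan.json <<'EOF'
{
  "plugins": [{
    "type": "iso",
    "repositories": [{
      "name": "file-dev",
      "pulp2_importer_repository_id": "file-dev-pulp2",
      "repository_versions": [{
        "pulp2_repository_id": "file-dev-pulp2",
        "pulp2_distributor_repository_ids": ["file-dev-dist-1", "file-dev-dist-2"]
      }]
    }]
  }]
}
EOF
# create the plan, then POST to its run/ action and follow the returned task
curl -u admin:password -H 'Content-Type: application/json' \
     -d "{\"plan\": $(cat complex_plan.json)}" \
     http://pulp3.example.com/pulp/api/v3/migration-plans/
# use the pulp_href returned by the previous call in place of <plan-uuid>
curl -u admin:password -X POST \
     http://pulp3.example.com/pulp/api/v3/migration-plans/<plan-uuid>/run/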
And if we look at that, we can see that within for this task group, all tasks have been dispatched and you can see also statuses of those tasks. One of them have been canceled, failed, or waiting or running, four completed. And also you can see what those tasks did. They created three repository versions and they created four distributions. Exactly what we asked for. So looking at the progress report here plus the task group will tell you all the details about migration and also you'll see it's completed. The important aspect is that we understand that content migration, especially content migration, can take a lot of time, especially if you need to copy it and if your system is really big. For that reason, we allow migration to be run while pulp2 is on and you can rerun it as many times as needed. Every rerun works more or less with the only with the difference with what changed in pulp2 and migrates only those parts. So reruns in case there were not so many changes should be much, much faster. That's why the workflow which we suggest is to rerun and rerun migration as many times as you need while pulp2 is working. And when you're ready to switch to pulp3, you shut down pulp2 services and run migration for the last time. And if there were no too little changes, it should be relatively fast. To give you an idea of the timing, one system which I tested, it has about 300,000 RPM packages and about 1,000 repositories. So we call it medium-size installation. And it took about 14, 15 hours to migrate content. But the rerun, without when there were no or very little changes, the rerun took about 10 to 15 minutes. Of course, your timing may differ, depends on your machine and many other things. But just to give you an idea, that difference between those two can be very significant. All right. That was the main workflow. We have some additional features for the migration plugin. And one is mapping between pulp2 and pulp3 resources. And some of them you can figure it out without this mapping. And some will be quite hard. But in general, I think it's really useful to have it. And let me share it with you. So I'm looking at it in my browser. The endpoint is pulp2 repositories. And if you look at it, we asked to create three pulp3 repositories and we migrated three pulp2 repositories. So there are three items here. So it says that this pulp2 repo ID, this pulp2 repo, migrated into this repository version, you expected to use this remote to sync into it. There is a publication for the content of this repository version. And it's distributed twice using those distributions. And if we look into distribution, we can see that one is indeed dev. And another one is the friend test. All right. So this is the mapping and this is how you can see it. Another mapping, we have for content. And you might not need it at all. But it's worth having it in mind that we have it. And you can take a look at it in case you have some concerns or you are not sure what migrated into what. So it provides here you like pulp2 storage path or pulp2 ID and the pulp3 content it migrated into. All right. Talked about that. Yes, this one as well. In addition, we have a feature of validating your migration plan. In case you create a complex plan and it's a big one, you probably generate it using some script or maybe creating it manually. There could be some mistakes and that's why we're checking for pulp2 resources if you specify existing ones or not. It's optional. So it's your choice whether to use it or not. 
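The mapping endpoints mentioned above can be inspected directly; the paths are as I understand them from the plugin, so verify them against the API docs served by your own instance:

curl -u admin:password http://pulp3.example.com/pulp/api/v3/pulp2repositories/   # repo / importer / distributor mapping
curl -u admin:password http://pulp3.example.com/pulp/api/v3/pulp2content/        # per-unit content mapping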
Another feature is there is a possibility to reset pulp3 data like pre-migrated and migrated data for a plugin. As I mentioned before, when you rerun a migration for a plugin, it does those incremental reruns and updates. And in case you would like to start migration from scratch, you would need to reset it. And there are simpler ways. You can just drop the whole pulp3 database if you prefer. But there are situations when you can't do that. Maybe you don't have enough permissions to perform such operations or maybe you have already one perfectly working pulp3 plugin and you just want to reset and migrate different one and you don't want to touch the data for the plugin which is working. So you have an ability to do that. Another one is in case something is off with your pulp2 content and you don't really want to fix it, it's something all of them you're not interested in moving it over and it creates problems during migration. You can specify it, just keep some corrupted content and that would be it. The last button on the list is the ability to remove the migration plugin and all its data after you switch to pulp3. It's mostly for the sake of saving some space. And this feature is not implemented yet, but we definitely need to deliver it quite soon. So it will be there. All right. Now speaking a little bit more about the storage requirements, as I mentioned before, if there is hard link support then you don't need anything extra for your content. At the same time you would need a bit of extra space for repository metadata because repository metadata is not migrated as is and pulp3 regionerates it. Again to give you some estimate numbers, what can be required, the REL7 repository metadata is about 100 megabytes and for this installation I mentioned before with 300,000 RPMs and 1000 RPMs it required an extra 10 gigs. Of course your numbers probably will be different and it hugely depends on the plugins and its content but again just to give you an idea. As for the database, while you are running migration and you haven't switched to pulp3 your postgres would need 70 to 80% of the pulp2 MongoDB. And it's because of the pre-migrated data we bring in a lot of data from pulp2 to work with so it takes a noticeable amount of space. But after you're done and you switch to pulp3 and you remove migration plugin and its data for the same content and repository your pulp3 setup for postgres should take no more than 20 to 30% of what MongoDB is to have. As an example I think I had a pulp2 database which used 30 gigabytes so during migration time my postgres required up to 20 to 24 gigabytes and after removing everything it will be under 10. Alright and I would like also to tell you just to give a full picture about what we are not migrating. So you are aware and see if it creates any problems for you. So we are not migrating global settings and those I think you could specify them in JSON files on the file system for importers and distributors. They are not migrated. Everything related to role-based access control, users, passwords, permissions, this part is not migrated. If you use protected repositories in pulp2 repositories themselves and content is migrated but the configuration for protection is not. So that you would need to create what we call cert guards in pulp3. Also pulp3 is quite different from pulp2 and that's why some configuration options or even features they no longer make any sense in pulp3 world or we decided to drop some features like consumer management. 
So obviously those things are not migrated. I think that's it. I believe we migrate the rest but definitely let us know if there is something that you lack in the migration plugin or something is blocking you and we'll see what we can do. Alright, here's a reminder of how to reach out to us. We are on IRC FreeNode on metrics in the following channels and here are our mailing lists as well. We are of course on GitHub and at the very top you can see our website. So thank you very much for your time. I really hope that you will find migration plugin useful and migrate to pulp3 soon. So talk to you at the conferences and over IRC. Thank you. Are we going live? Okay. Hey, I'm here with Tanya to answer some questions. I'm here with Tanya to answer some questions. I guess we're very short on time. So in any case, definitely reach out to us at our pulp stand or IRC at our channels. There was a question about Python 3.7. I think you misread that.
Pulp helps you fetch, upload, organize, and distribute software packages. With Pulp 2 approaching EOL and Pulp 3 being more stable than before, we strongly encourage you to move to Pulp 3. It might be a big deal if you have a large setup and a lot of carefully curated content and repositories. To make it easy for you, we'd like to introduce a plugin which allows you to migrate from Pulp 2 to Pulp 3 smoothly and without recreating everything from scratch. In this talk, we'll share: - how we are doing the migration - what are the benefits - when you want to use it - how it goes (demo) The target audience is Pulp 2 users.
10.5446/53631 (DOI)
Hello. I'd like to talk to you today about automating Terraform. I've been working in automating some type of development pipeline for over a decade and only actually been the past three years that I've started to see automation and infrastructure as code. This is something that I find a bit strange. I think we really need to push forward and get things under the same pattern that we are pushing all our development towards. This goes to a talk that was had last year by Chris... Sorry, Chris. It was a talk last year where he got into the ideas of how we got to this state, what the culture is behind doing things manually on the operation side and what we can do in our organizations and in our teams to push forward and get proper automation workflows in place. Today I'm going to talk a little bit more about the specific tools and techniques related to Terraform and getting a Terraform pipeline into your workflow. It starts with all the tools that exist for developers. There's plenty of CI and CD tooling out there that they follow. These tools can also be incorporated into the operation side. Just as we're enforcing developers to get code coverage and tests and static analysis done on their code before it gets deployed, we should be doing the same for our infrastructure code. It comes down to a basic workflow. We want to be able to commit the code, we want to push it, and we want to run like hell as our infrastructure pulls up. What we want to do is we want to push the code, we want to have some type of analysis on the code, we want to have a plan to see what's going to change, we want to then be able to have some type of evaluation and assertion that things are good, and then we can go ahead and either apply these and then merge or merge and apply. The order is how you choose the pattern to work for you. Which comes to the key point of my talk that I want to have is all of these tools that I'm going to cover, everything out there might not work for you. This is an opinionated world, opinionated ecosystem, opinionated, I have opinions. All these different strategies on how you're going to do them might not match what you currently have. I think the best way to get automation is to just by start by automating what you already do. As soon as you have the basic automations then you can build and improve and grow around it. Whereas the key is just to automate now. Start by something small. There's going to be tools like Atlantis that I'll talk a little bit more about later, but anything can do it. If you already have the tooling for your development flow, use that. Keep within the same ecosystem of your infrastructure as your development. Be able to then use the same thing so you can speak the same language with your developers as well. Now, there's going to be times when things are going to need to be manually done in Terraform. Terraform is a very tricky beast. There is instances where you're implementing new modules on infrastructure you already have in place and you're just trying to focus your attention there while the rest of the team is doing something else. There's going to be need for some type of manual intervention. This is just something to keep in mind while you're building out the automation of your systems. Focus on the automation, but don't forget that as you're growing, you're going to, particularly in the beginning, need this extra intervention of manual techniques. 
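Stripped down to shell, the loop described above is little more than the following, whichever CI tool ends up running it:

terraform init -input=false
terraform fmt -check -recursive     # formatting gate
terraform validate
terraform plan -input=false -out=tfplan
# ...pause here for a human (or a policy check) to evaluate the plan output...
terraform apply -input=false tfplan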
One of the things you're going to also need to make sure that's built in place into your Terraform structure is your state management. You need to make sure that it has the right access to do the things, but is also restricted in a good way to keep it secure. You want to also take advantage of the possible state providers out there. The big cloud providers give you buckets to store your state. There's plenty of paid solutions like Terraform Cloud and IBM. There's also in-house things that you can do to build a remote state for your current infrastructure. Now, once you get your state set up, what you want to do is focus on the variable management. I worked on a client that had some pretty small team, but what they were doing were passing around their TFR files in Slack because we had so many things going on and nobody knew what was actually happening. It was a real big mess. The first thing I wanted to do and got in place was to encrypt the TFR files in KMS with one of the cloud provider KMS tools. Then the code itself had the values encrypted in them and that could then be stored in our version control system. This technique needed a bit of scripting around to get things working easily for the development flow, but Terraform itself managed the decryption and encryption on the apply. There was no extra work on Terraform just around our Terraform workflow. If this isn't the right step, if you don't want to take that time to invest in that, you can also use CI values. Each CI tool has some type of environment variable and usually that gets encrypted as well. You can have that stored in. That would be where the Terraform apply happens and Terraform can then read these environment variables from the tooling itself. If you choose this technique of storing it in the CI, you're going to want to make sure that if you're still doing manual intervention, you have a way of getting things back and forth. The next focus of your whole ecosystem of what you apply is going to be focused on the locking. You can either lock or not. I come from the group of not locking the state file. Some state management solutions don't even allow locking, but that's a different topic of what your strategy is. I think of it as if the main branch I have is going to be the source of truth of what I apply. There's only going to be one change at a time that's coming into this main branch. As long as I make sure the applies happen in the time that they get applied, I know what the state is going to be and what it should be. Then I don't need to incorporate locking to prevent other things from going on while this is happening. That is my philosophy on the situation. Other tooling is built around a different technique where they do get locked and you then have to make sure that you do things in the order that they're designed or find the ways to work around the lock. All options, just things you need to consider before you get started. We've talked about the basics. Now for the first set of tooling I want to focus on, it's about the static analysis. This is about pushing your security left, pushing the quick fixes that are going to have big impacts closer to the development flow. It takes moments to get static analysis implemented. Adding a TFSEC to your repo really is just an execution of TFSEC. Then you get a whole list of possible vulnerabilities that you're going to expose into your ecosystem. 
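Tying the state and variable handling above together, a minimal sketch; the bucket, key and variable names are placeholders:

# remote state: pass backend details at init time so nothing sensitive lives in the repo
terraform init -input=false \
  -backend-config='bucket=my-terraform-state' \
  -backend-config='key=prod/terraform.tfstate'
# secrets come from the CI tool's encrypted variables; Terraform picks up TF_VAR_* automatically
export TF_VAR_db_password="${CI_DB_PASSWORD:?set this in the CI secret store}"
terraform plan -input=false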
It might take longer to filter out some false positives, but getting these tools in and executing again takes just moments for the results to come back. There's blog posts I've linked to the bottom that kind of delves deeper into each of these different tools and how they work. I've also linked to a reference in an article by Rapid7. They did a case study. They provide penetration testing. They found that of the penetration test they ran, 96% of the failures were caused by misconfigurations of the infrastructure. These kind of misconfigurations are what lead to security breaches in a bigger environment. You want to get this close to the development. You want to have these values that they provide. This information about your system happens in moments. I can't stress this enough how important this is. There's other bits of static analysis that you should be including as well. There is Terraform validate and format. These things come out of the box with Terraform. They evaluate to make sure that the variables are valid or that the code is structured in a readable way. As your team grows, they're able to then be on board and understand what's happening quicker. TF-Lintz functions in a little bit of a cross between these where it will validate not just that the variable is a valid string, but that it's going to align with the rules that are set in the cloud providers and meet the certain patterns and rules that get in place. As you're pushing things left, you can go even further left onto the developer's machine or the ops machine or whoever it is who's working on your Terraform code. You can include it in an IDE. If you're using VS Code, you have this plugin that automatically handles the formatting and the validating. If you're not using this IDE or there's other, you know, not choosing to use these plugins, you can still do pre-commit hooks and making sure that as the code is getting developed on the developer's machine, they're already getting the feedback on what the state is going to be as it goes forward. Going into some bigger tool sets, we want to first go into Terraform Cloud. This is a tool that HasherCorp provides and that HasherCorp being the maintainers and creators of Terraform. Now Terraform Cloud, it's not open source. However, it does provide a free option for small teams. I think up to five people it is free. The focus that Terraform Cloud does provide is about managing the state. What you have available is instead of just storing your state file in a bucket as a file, Terraform Cloud manages it in a much deeper way and provides more functionality around it. They build the whole ecosystem around the state management. What they also do is allow for remote executions of your Terraform. You can execute the Terraform on their machines or if you have certain restrictions on where you can execute it, you can use your own machines. Now the caveat with Terraform Cloud, the one thing you got to consider is that it doesn't provide a full ecosystem of proper workflow. What you want to do is take Terraform Cloud as just an extension to the CI tooling. You want to be able to still run these security scans and analysis and get better feedback into the PRs themselves. This isn't something Terraform Cloud provides, but it's something you can use and leverage in your CI tooling to get a good workflow. The other tool to get into that does come out of the box with a good workflow and it allows the extensibility for adding the security scanning into it is Atlantis. 
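Pulled together, the static-analysis gate discussed above is only a handful of commands, each of which exits non-zero on findings so the CI job fails fast:

terraform fmt -check -recursive   # formatting
terraform validate                # syntax and variable validity
tflint                            # provider-aware rules on values and arguments
tfsec .                           # security misconfiguration scanning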
The one thing with Atlantis that you need to think about is that it does use the lock. You have one PR that becomes the focal point of the action. It executes a plan and it executes on the Atlantis system and then in Git you can actually write a comment to the PR to approve it and apply the Terraform and then it will execute again on the Atlantis system and merge the code for you. If you have another PR it has to wait until the first PR has gone through its execution phase or you can add comments into the PR itself to override the other lock and takeover. These are techniques that you can work around being locked in certain ways and take advantage of this workflow. With Atlantis you need to be aware that it is a managed solution. You have to take care of deploying Atlantis. They provide a helm chart that allows it to be deployed into Kubernetes easily. This is something that as your ecosystem grows and you are running into issues in your production environment you need to make sure that the environment where your Terraform is being executed is also highly available. You do have monitoring around it and you are paying attention that this is as critical as your applications are. One of the things Atlantis does is it takes the plan output and puts it into the PR. Here is an example of the output of a simple Terraform plan. What Atlantis will do is add some formatting to it to allow the markdown to show the colors. You add more visibility into your system. This is what it is all about. You have a change that the reviewer goes in and sees the value from 0 to 1. It looks good to me. When you have this plan output showing you quite visibly what is going to change you are able to address the concerns of what is going to happen in the environment that you are deploying to. This was so interesting that I have actually read a little script, TF-RV, that does the same thing if you are not using Atlantis. You have this ability to have this markdown available to you. TF-RV will also handle colors on your local environment to improve the visibility of these changes. As we switch from the bigger tooling to some other helpful tools in your Terraform environment, the next key component is Terra-Agrant. This is designed very well for scaling. As your project scales, as your code repository scales, you keep things clean and you are able to grow more without having to reproduce the same code everywhere. If you are starting off on a small project, which is the one module, you probably are not ready yet for needing Terra-Agrant. However, as that project is growing and you are becoming a larger entity, this is something that you want to incorporate into your Terraform early enough that it actually provides the value as you scale up. I couldn't be having this talk at all without talking about tests. We need some kind of level where the code gets in, gets evaluated, and tests happen. Introducing tests just is with your development applications. Adding tests to the code is something that takes a bit of a hurdle to start. You can't just write your code and the tests come with it, like the static analysis. In this case, you have to actually put the effort in to make these tests work. Now, that investment in the beginning to get tests pays off very well long term in adding confidence of what's happening. As your code lives longer, having tests that show that this is what I want in my state, this is how it's supposed to be. 
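For reference, a bare-bones Atlantis setup looks something like this sketch; see the Atlantis docs for the server-side configuration and the full set of pull-request comment commands:

# repo-level atlantis.yaml: one project at the repository root
cat > atlantis.yaml <<'EOF'
version: 3
projects:
  - dir: .
EOF
# the rest of the workflow is driven from pull-request comments:
#   atlantis plan     -> posts the formatted plan output back to the PR
#   atlantis apply    -> applies the reviewed plan
#   atlantis unlock   -> releases the lock so another PR can proceed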
As people forget what that is, as time passes, you're able to still have these tests validate that the changes that are making are going to impact things in a negative way. This level of confidence is really critical to a component that's as critical as your infrastructure. There's two types of testing, one of which on the bottom are two tools that are unit test-focused. These take a look at the Terraform code itself or possibly the plan output and evaluate without actually doing anything in the real infrastructure and are focused more on the compliance rules that you set up in place. They match the policies that you have of what it should be. Now, those are really good to implement when costs is of the issue, but the real value comes from the end-to-end tests that first will actually execute the apply, then run some evaluation and make the assertions of their tests before then tearing down the environment. With these end-to-end tests, you need to make sure you have a dedicated environment that's different from your production and your development so that as things come and go, you're not impacting other work. You also need to consider that these are going to be longer and more expensive, both for development and time and price. Choosing the testing technique that you want again fits into what fits your workflow, but really having these tests is going to be critical to having confidence in your code-based long-term. Now, a tool that I've come across and I've never actually implemented into a system is local stack, which I think would be quite interesting if we can, instead of having the infrastructure in place, have this mock layer that the test can execute against and we can then assert based on this. I'd love to hear some comments about having tried local stack with some of the infrastructure testing tools. My last bit of tool talk is focused on the versioning. If you're locally running some Terraform, there's a high chance in this past year as the versions of Terraform have started to evolve much faster. There's going to be some times when you have one environment using one version and another a different and locally you want to be able to switch across this. If you are familiar, TF-N'd make this really easy. Now, your CI platform, the tool that you're using to actually execute the plans, this is something where you want to lock in your version. You don't want to be using a TF-N'd in your platform. You want to know which version, at which time, at which state that is used for this. That needs to be coded into the system that's doing the actual implementation. I also, in versioning talk, want to mention that if you're on 0.11 or something earlier, some of the tooling that I've brought up doesn't actually fit, doesn't work well out of the box. This shouldn't stop you from actually getting automation. You can still get the basics of your plan, your formatting, your analysis, and applying only after accepting the output of a plan. I've also put in here a link of this wiki I found, somebody maintaining a large list of Terraform tools. I'm not sure if it's fully inclusive of everything, but it was quite vast. If something I haven't mentioned in there, or you want to add something else to your system, you might find it listed there. I could have chosen to talk specifically today about just one tool, how it works, and how to implement it and do some real demos with the one tool. However, I think the value was to cover more of the aspects of what you can get in place. 
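Two quick sketches for the points above: the apply/assert/destroy loop of an end-to-end test, written here as plain shell against a dedicated test directory, and version pinning with tfenv; the paths, output names and versions are placeholders, and the -chdir and -raw options assume Terraform 0.14 or newer:

# end-to-end: always destroy, even when an assertion fails
set -e
terraform -chdir=test-environment init -input=false
trap 'terraform -chdir=test-environment destroy -auto-approve -input=false' EXIT
terraform -chdir=test-environment apply -auto-approve -input=false
endpoint="$(terraform -chdir=test-environment output -raw endpoint)"
curl -sf "https://${endpoint}/health"    # the actual assertion(s)

# version pinning: tfenv locally, required_version in the code the pipeline runs
echo '0.14.5' > .terraform-version       # picked up automatically by tfenv
tfenv install 0.14.5 && tfenv use 0.14.5
cat > versions.tf <<'EOF'
terraform {
  required_version = "~> 0.14.0"
}
EOF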
I don't really have time for a full demo of any particular tool. However, I did gather a few slides of what you can implement in this existing CI tool. GitHub is your version control system of choice, and you're choosing Terraform Cloud as your execution platform. There is this code, which I didn't actually include all of it, but in the link below, you can get information that HashiCorp is provided, and you can just copy paste this, and you immediately have a workflow that fits what is just basic for giving Terraform applied. If CircleCI is your choice, I've again truncated a block here, but CircleCI provides open source orbs, and these allow you to run your Terraform through the CI platform, and again, get your formatting, your validation, and everything you need to plan and then apply on approval. Lastly, I've included a bit about Jenkins. If you're in here using Jenkins, this is again a very basic pipeline, but Jenkins has the vast ability of being extended with Groovy, and so much more can be done, and at this link below, there is the ability to learn more about some of the different techniques. This isn't necessarily the only location, and just like all the other tools, you can find more about them on the internet. Wrapping up, I want to mention that visibility is the most important thing here, right? We're taking something that is happening on someone's machine and actually showing where it is, giving us an audit trail, a clear state of what happened, who did it, and when. This is actually quite critical in most systems that have some type of compliance and need to show what's going on in your system. Instead of asking, did you apply this? What did you apply? What was going to be the change when you applied it? Now, I've definitely seen some people who have felt that having a system that executes your Terraform apply is insecure, right? You have something that has access to do more than what anybody else can, and I find that it's actually more secure to have this one central place. Instead of distributing it out across different people with different rules that you have to then track, you have this one location where if something goes wrong, you have the point where it happened, you get the visibility of who's done it, when, and how. So again, from my opinion, this is actually a more secure process. Talking about security though, I've mentioned the ideas and the goal of being able to put the plan output in the PR, and if you're not on 014 of Terraform yet, the plan output is going to potentially contain values that are a little more secure and sensitive. So possibly that might be something to consider as you're going forward with your choices here. My name is Jeff Knurk. I work at Clever Tech, which is 100% remote tech consulting company, which of course is hiring. I am officially on social media, though I don't actually spend much time there. I would be happy to hear and discuss anything if you want to start a dialogue there, however, and I'm open for questions right now. So thank you very much.
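As one concrete shape of that, here is a trimmed-down GitHub Actions job; the full HashiCorp-provided version, including the Terraform Cloud credentials, lives at the link referenced in the slides, so treat this as a sketch:

mkdir -p .github/workflows
cat > .github/workflows/terraform.yml <<'EOF'
name: terraform
on: [pull_request]
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1   # pin the Terraform version here
        with:
          terraform_version: 0.14.5
      - run: terraform init -input=false
      - run: terraform fmt -check
      - run: terraform plan -input=false
EOF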
If Continuous Integration/Continuous Delivery is critical for application deployment, why is it not being equally leveraged to manage infrastructure? Most teams I've seen have reached the first goal of defining their infrastructure configuration as code (often with Terraform), but they tend to stop there. Access and/or education on how to run Terraform is given to a few select people, and they then run it in the wild jungle that is their own machine. In this talk, Jeff will highlight some of the techniques, tools, quick wins, advanced features, and key measurements for putting your Terraform code through the pipes of automation.
10.5446/53632 (DOI)
Okay, welcome to the host your own on-premise Ansible Galaxy talk. My name is Brian Vauters and you can get all of this content, these commands that I'm using today at the slides which are hosted here. This is what we'll be doing today. We're going to set up an on-premise Galaxy. We're going to do that with a user interface and without a user interface. We're going to sync in collections and roles from galaxy.ansible.com. We're going to install collections and roles using the Ansible Galaxy CLI. We'll upload collections using the Ansible Galaxy CLI. The Ansible Galaxy CLI does not support uploading roles. Also we're going to organize roles and collections in various repositories. Let's look at the software that we're going to use today. The first is Galaxy NG. That stands for Next Generation. It's a fork and rewrite of the original Galaxy code. It has a user interface. It's developed by the same team that runs galaxy.ansible.com. It only supports collections. It does not support roles and it's GPLv2. We're also going to look at pulp Ansible. Pulp Ansible is the back end for Galaxy NG. You use it there as well. It has an API and a CLI. This is a management CLI for performing various actions with pulp Ansible. It has no user interface. It has collections and role support. It also supports roles. It is GPLv2. Let's go ahead and deploy Galaxy NG. This is an all container based situation. You can use either Docker or Podman. All of my commands are with Podman, but it all works with Docker as well. This is the container name that contains Galaxy NG. I've already pulled that on my system. You can run this on your system. The first thing you'll do after you have the container before you start it is to create a settings file and a couple of folders that the container will need. Let's go ahead and do that. This is where we'll be doing that. I'm going to first make the folders. This is what the container will use. Now I'm going to make that settings file in there. You can see the settings file. There's nothing really complicated about it. It's got just three settings in there. There are more settings you can set. You can see the documentation for them. This is what you need to just get running. Next we'll start the container. You can do this with or without SE Linux. It runs on port 8080. Right now it's HTTP only. We're going to work on fixing that. You can do this with Docker or Podman. The container name is Galaxy NG. Again, it's using this particular image. I'm going to use the SE Linux command option. This is the same command only without SE Linux. I'm going to use this one with SE Linux. Let's go ahead and start the container. The container started. You can see it running here. We're going to load some initial data. Galaxy NG requires you to load this initial data. There's also an Ansible installer to install this with if you don't want to use containers. That takes care of this step for you automatically. This is just some database fixture data that they want that their application expects. Let's go ahead and run these commands. This is going to install the fixture data into the container. It's loading that data into the database. These containers use Postgres. It's contained inside of the container. You can also have Postgres running outside of the container if that's what you want to do. This installed the fixture data into the database. That's just a required step to get going. The next thing that we're going to want to do is to set an admin password. 
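To recap the setup steps so far in runnable form; the image name, mount paths and setting names here follow the upstream galaxy_ng quickstart as I recall it, and the talk's slides have the exact commands:

mkdir -p settings pulp_storage pgsql containers
cat > settings/settings.py <<'EOF'
CONTENT_ORIGIN = 'http://localhost:8080'
ANSIBLE_API_HOSTNAME = 'http://localhost:8080'
ANSIBLE_CONTENT_HOSTNAME = 'http://localhost:8080/pulp/content'
EOF
# the :Z volume suffix is the SELinux variant; "galaxyng" is the container name used below
podman run --detach --publish 8080:80 --name galaxyng \
  --volume "$(pwd)/settings":/etc/pulp:Z \
  --volume "$(pwd)/pulp_storage":/var/lib/pulp:Z \
  --volume "$(pwd)/pgsql":/var/lib/pgsql:Z \
  --volume "$(pwd)/containers":/var/lib/containers:Z \
  pulp/pulp-galaxy-ng:latest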
To do this, we'll use this Podman and ZEC command. We're going to run this reset admin password command. The admin user is named admin. What we're going to do right now is give it a password. I'm just going to give it the password, password because it's a password. At this point, the container is running and set up. We can see it if we access this site here. Again, it's running on localhost port 8080. This is your login screen. Your users will use this. This is running from the container. I just gave it the password. This is what the site looks like when you first log into it without any data inside of it. I also, before we go too much further, I want to just point out you can use the Podman logs or Docker logs command. The container is called GalaxyNG. This is a good way to look at the logs. Let's go ahead and just get those up on the terminal here. I'm going to use this terminal to do so. I'm going to use this terminal to do so. It's just a running list of logs. Let's look around the user interface to check out this project, GalaxyNG. The first thing to know is that this fits your data populates database with some repositories. There is a community repository. Also, a repository is a set of collections or roles. It's like a bucket where you can keep some number of collections or roles in. The community repository is where content that is synced from galaxy.ansible.com lands. We're going to look at that. Published repository is content that you have uploaded. This might be private content. When I say content, I mean collections in this case because GalaxyNG only supports collections. I should say collections that you've uploaded either from the ansible galaxy CLI or through the web UI, which we'll show both of those. Then there's also the RH certified content. This is for content that you sync from cloud.redhat.com. Also there is this rejected and staging repositories. This is where collections can be staged for an approval and review process. I'll show that as well. Also here in the repo management area is this remote section. In here, you can configure what will be synced from either cloud.redhat from galaxy.ansible.com for the community content or cloud.redhat.com. We're only showing off the community content here, but the other one works roughly the same. Let's look at how we're going to specify what we want to sync. We'll do the repositories and the remotes. Let's configure a remote. Here's a simple text file with a requirements file. This is the same kind of a file you would hand to the ansible galaxy CLI to sync collections. You can hand that same data to this website and it will perform the syncing for you. It can sync over time, like every day or something like that, to constantly bring in new versions if they're published out there. We're going to take this. We're going to install the pulp installer, which is an ansible collection itself. It also needs the ansible.puzzix installer. This is going to bring in two collections here. Let's take that copy data and let's go over here to the edit screen. Here it's going to say, I need a requirements YAML. To do that, it will... This is contain basic.yaml. It's got that same text that I just showed you. We'll go ahead and say, hand it the requirements file there. We'll save that and then we'll go ahead and hit sync. This is the project running a background task, which is interacting with galaxy.ansible.com and bringing down collection content into galaxy ng. That task finished. What we have to do is we have to select... 
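In script form, the password reset and the requirements file fed to the community remote look roughly like this; the collection names are the ones mentioned in the talk, and the manage command options may differ by release:

podman exec -it galaxyng pulpcore-manager reset-admin-password --password password
cat > requirements.yml <<'EOF'
collections:
  - pulp.pulp_installer
  - ansible.posix
EOF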
This pull down bar has the three different repositories that can contain content, kind of like buckets. This is going to be in the community bucket. You can see that it brought down these two pieces of content, just like we specified in the requirements file. Next, we can also use... I showed that there's this other... Let's say that we want to bring down the Amazon AWS collection. Well we can go to repo management and you could add it in there and just make your requirements file bigger to sync more and more. Or I can just give it the AWS one, which only has AWS in it and save that, sync that. That will run and that'll take some time. We'll come back to that. This does get to the gotchas. It does not provide dependency resolution, so your requirements file has to have everything in it that you need. Also every time you sync, it overwrites the previous collections that were there. Let's go and look at the logs, see how this is drawing. You can see the logs are reading lots of data from galaxy.ansible.com here. This should finish in a moment. We'll see this gotcha because now, so this completed, so now if we go to community area again, oh look, the other things are gone and now we only have the AWS collection. This is one of the gotchas that it doesn't, it overwrites the content that was there previously. You want to make sure to just have whatever you want synced in there. You can see various things about it. There isn't much that's readable in here, but other questions do have meaningful things. It did sync a lot of versions though. You can see that there's a lot of versions of this collection that were brought down. Then next, we're going to, oh, also do not have a netRC file because the galaxy CLI looks at that and that might cause you some problems. If you're wondering why stuff isn't working, do not use a netRC file when you're using Galaxy Engine. Let's go configure the CLI. The way that this works is you want to get an API token because you need a token to interact with Ansible Galaxy. We're going to take this token here. What we're going to do is we're going to make an ansible.cfg file here. I'll just put that token in there. Then we'll go back to Galaxy and G and we'll go to the repo management area. What we need to do is connect the CLI to a particular bucket. This content is in the community content. Let's go and we're going to take the CLI configuration area and we'll copy this to our cookboard. Then we will go put this here. You can see it says token, put your token here. We'll just put the token there. You can see it's pointing the CLI at the container and it's pointing it at a particular repository inside that container. That's configured. I am going to go back here and just sync down the other content. For the demo purposes, I actually wanted the other content here. I'm going to bring back that other content from the other requirements file. Here at the content is back. What we'll do now is after we configured the CLI, we have the token and the config in there. I will install it with a command like ansible galaxy collection install. I'm going to install it to the local directory. That's this dash p part. I'm going to tell it to install this collection. This is the collection that we sync down from Galaxy. Here you can see, if I install this, oh, actually I have a net RC file. This is why you shouldn't have a net RC file. That's why you don't have a net RC file. Cool. 
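The resulting ansible.cfg is small; the URL is whatever the repo management page shows for the repository you picked (the path below is a guess at the community repository's content endpoint), and the token is the one copied from the UI:

cat > ansible.cfg <<'EOF'
[galaxy]
server_list = onprem_community

[galaxy_server.onprem_community]
url = http://localhost:8080/api/galaxy/content/community/
token = <token copied from the Galaxy NG UI>
EOF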
Now, you can see that it did receive the content, the collection, and it installed it right from my local system and not from the internet. You can see it installed here in the ansible collections. Next, we want to cover installing, let's see, uploading from the user interface. To do this, you'll want to have a collection. I'm going to use this collection. This is from actuallygalaxy.ansible.com. I've downloaded it as a tar ball. This is the collection I'm going to upload. This is private content that you would want to upload. You need to create a namespace for it in the UI. The namespace for this is NewswinderD. Let's go create that namespace in user interface. My namespace is, you can create this and say NewswinderD. After creating that namespace, you can then upload a collection. You can do that by just going up here to the upload collection button. You can, here's my tar ball with the collection. I'll upload this. You can see it goes through this import process, which actually validates and lends it and stuff like that. You'll see the logs here. This collection has some quality issues, but the point is that it imported correctly. Now, but you'll notice, this is in the published area, because it's stuff you've uploaded, but oh look, it's not here. That's because by default, Galaxy NG has an approval process. In the approval area, you will see collections that have been uploaded, but they need review. You can reject or approve. You can view the import logs. You can download it and evaluate it. This provides a review process for your organization to make sure that only quality or trusted content is being uploaded. I'm going to go ahead and approve this. There are no more approvals now, and you'll see that collection is now available in the published repository. If I hook the CLI up to the published repository, I would be able to now install to that. You can also turn off the approval workflow, and you can do this by putting this setting in your settings file and restarting the container. The docs are there. There are users groups and permissions in Galaxy NG. We're not going to go over, like, demo them that much, but you can create users, multiple users, and you can then organize them into groups which have various permissions, which is really useful. You can say things like, oh, edit. I'm going to add namespaces, change uploads namespaces, or collections. I'm going to be able to actually modify them, or I can change remotes, or I can't. This allows you to have different users who are able to add content and other users who are able to only consume content. That's very useful. Next we're going to demo uploading content from the CLI. This is the same as the install command, only use the publish command to do so. I'm not going to demo it because we already configured the CLI, and so you just use that same command to provide the publish. Now we're going to move on to pulp ansible. I'm going to stop the Galaxy NG container because they're using the same port. You could have them running together, but I just don't. You're going to pull down this container. This container is called pulp-fidora31. This has pulp ansible in it. It also has a bunch of other content types, like a Python registry and RPM support. It doesn't have Python registry. It has a Python API, like PyPy, and it has a container registry. It also contains pulp ansible, so I'm just conveniently using this. There's also a Sentos 8 version that's available. 
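The CLI side of what was just shown, plus the switch that disables the review queue; the setting name is my recollection of the galaxy_ng docs, so verify it there:

# install from the on-premise server into the current directory (the -p path)
ansible-galaxy collection install pulp.pulp_installer -p ./collections
# publish a locally built tarball instead of uploading through the UI
ansible-galaxy collection publish ./<namespace>-<name>-<version>.tar.gz
# to skip the approval workflow, add this to settings/settings.py and restart the container:
#   GALAXY_REQUIRE_CONTENT_APPROVAL = False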
If you just wanted pulp ansible, you could build a container that just has it, or we can publish one for you if that's helpful. You would run podman or drpull and pull this container down. I already have done that, so let's do basically the same thing. This gets one extra setting, but let's go ahead and create the settings for this. I'm going to just remove the Galaxy NG folder because I don't need it anymore. Let's go ahead and create those settings. I'm going to make the directory. You can see it made similar folders there. I'm going to populate the settings file. There are the settings there. Now we're going to go ahead and start the container. Same thing. There's an SCLinux option. There's a non-SCLinux option. I'm going to use the SCLinux option here. Let's go ahead and run this. You can see I'm calling this container pulp because pulp ansible is in it, and it's using this tag name here, a container name there. Let's actually start the command for the container. The container started. That's great. Let's go look back here. We similarly need to assign it an admin password. Again, very similar process here. Let's assign that. I'm going to assign. There are some warnings. This has to do with compatibility for things for the future, but it's inspected. I'm going to give it password again. I think I didn't type the same password in, so it rejected it. The second time it set it correctly. I am actually going to, well, let's turn it on here. For this, we're going to use a CLI. This is going to let us configure remotes and repositories and do the things that you were doing through the user interface, only you're going to use the pulp CLI. First you'll pip install pulp CLI. I've already done that. It's over here. This terminal has the pulp CLI. You can see it's in a virtual amp here. Then you're going to want to point it at your container. This is in the config pulp settings.toml. There should be an L at the end of there. You can see that I already have that. I am pointing it out the same container. That configures the CLI. Also, you want a net RC here. This is the admin and password that I set because the pulp CLI uses your net RC. I just set the net RC there. That will allow it to authenticate. Let's do some basics. Podman logs pulp will show you the logs. That's here. Then we're going to use pulp status. This is going to show off just the normal health status. You can see there are online workers. You can see all the software that's installed in here. One of which is pulp ansible 061. It's ready to serve content. It's connected to Redis and Postgres. That's good. It also has a browsable API documentation. You can see this here. This is hosted from the container. It's at this pulp API v3 docs link. This has a lot of software into it. There's a lot of APIs here. This is a great way to see what all APIs are being offered here. For instance, we just looked at the status API. This is the API that drove that command that we just used. Next, those are just some basics. We're going to create an ansible repository. This is using the pulp CLI that we were just talking about. We're going to create an ansible repository. I just created a repository named my repo. Then we're going to create an ansible remote. This is a collection type remote because it's going to sync collection content from gallupc.ansible.com, which I give in the URL. I'm just naming it basic. It has that same requirements file that we had from the previous part of the demo. I made the remote. Now I'm going to make a distribution. 
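To recap the CLI-side setup described so far in one place: the option and key names below are reproduced from memory and may differ between pulp-cli releases, so treat this as an outline and check the --help output; the port and password are placeholders.

    pip install pulp-cli
    mkdir -p ~/.config/pulp
    cat > ~/.config/pulp/settings.toml <<'EOF'
    [cli]
    base_url = "http://localhost:8080"
    verify_ssl = false
    EOF
    # pulp-cli authenticates via ~/.netrc (unlike ansible-galaxy, which dislikes it).
    cat > ~/.netrc <<'EOF'
    machine localhost
    login admin
    password <the admin password you set>
    EOF
    pulp status                                     # workers, plugins, storage: is it healthy?
    pulp ansible repository create --name my_repo   # repository that will hold the collections
    # Remote pointing at galaxy.ansible.com with the requirements file (flag names may vary):
    # pulp ansible remote -t collection create --name basic \
    #     --url https://galaxy.ansible.com --requirements-file ./requirements.yml
    # Next step in the demo: create a distribution to expose the repository.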
A distribution is a pulp term for taking a repository and making it accessible. It's accessible at this base path argument. I'm going to put foo in here. You're going to see foo in the URL. This lets you have content and choose where in the web server to make it available. I'm going to make a distribution that's going to take the repository and make it able to be interacted with from the CLI, for example. Now we have a distribution. We can go and view it through the Galaxy API. This is a web server. This is a container API. This is basically what the CLI uses. Let's go ahead and look at it because this is a great way to look at content in pulp. You can see there's no content in here because it's available, but we haven't done anything with it. The next thing we're going to do is we're going to sync the content. That's with this pulp ansible repository sync command. This is actually performing a sync similar to what Galaxy NG does. Only it's done through this API call. That is done. Because that sync completed, if we look back, if we look back, where previously there was zero content, now there's two content. You can see it brought down pulp installer and posted on the POSIX collection. Just like what you saw on Galaxy NG, only this is using pulp ansible. Next you would want to install from the CLI. To do that, you want to look at the distribution list that you have. You can do this with the pulp ansible distribution list command. Sorry, I'm getting my command pasted here. The distribution has this base path of foo and actually it has this client URL here. This is the URL that you'll end up handing to the ansible Galaxy CLI. You will end up installing content with a command that looks like this one. You can actually specify it right on the command line. This is that same URL with foo in it. I'm going to tell it to install the pulp installer. Let's go ahead and do that. Let me remove the previous installation of it. This will do the install. You can see that it brought in that content hosted from pulp ansible there. That is all I'm going to show for that. At this point, there's more demo content in here, but I'm going to stop here because I'm out of time. I'll just take you through the slide parts of this. There's uploading CLI. You can upload content directly from the CLI. It's pretty similar. You just end up using the publish command ansible Galaxy collection publish. You end up giving it that same URL. There is no approval workflow with pulp ansible. You end up just once you publish it, it's immediately available. You can then do role things. This is a very similar process. Only the remote that you create is called role. You end up giving it a URL like this one, which specifies role content by virtue of the filtering that you tell on the Galaxy API. If you left this part off and you created a remote, you would literally be mirroring all of galaxy.ansible.com roles. You perform the sync. It's the same process as collections. You install roles, the same thing. You end up handing it this dash S to point it at the container. You give it the name of the role that you want to install. You can configure the CLI permanently by putting a command like this into your ansible.cfg. I'm not going to demo copying, but you can see that there's a whole copying workflow for both role and collection content. This lets you organize custom repositories with mixtures of content. Maybe your custom content layered on top of content from ansible.gallaxy.com. These are some places that you can get help. 
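Pulling the consumption side together, the commands from this part of the demo come down to roughly the following; the base path and collection name come from the demo, while the exact client URL depends on your deployment and is best copied from the distribution listing.

    # Expose the synced repository under the base path "foo" (flag names may vary).
    pulp ansible distribution create --name foo --base-path foo --repository my_repo
    # The listing shows the client URL to hand to ansible-galaxy.
    pulp ansible distribution list
    # Install straight from the Pulp-hosted Galaxy API into ./collections.
    ansible-galaxy collection install pulp.pulp_installer -p ./collections \
        -s http://localhost:8080/pulp_ansible/galaxy/foo/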
For help, there are mailing lists and channels on Freenode, and also bug-filing docs. If you have any feedback for me, please email me at this address. Thank you for your time, and thank you to everyone who helped create this project.
This talk will demo the setup and features of an on-premise software for storing, mirroring, and distributing Ansible Collection and Role content. This is analogous to an on-premise version of galaxy.ansible. To get up and running quickly, we’ll be using a pre-built container with pulp_ansible and galaxy_ng. A single container is all that is needed for setup. Once setup, I will demonstrate: - Creating one or more repositories to store Collections and Roles - Installing Role and Collection content using the ansible-galaxy CLI client from these repositories - Synchronizing Collections and Roles from galaxy.ansible - Uploading Collection content via ansible-galaxy CLI - Copying Collections and Roles between multiple repositories, simulating dev -> staging -> production environments - Perform these operations using a great UI, whenever possible, and APIs otherwise
10.5446/53636 (DOI)
Hi everyone, my name is Ina Panova and I work at Red Hat. I have been working on the Pulp project for about 7 years and I am based in Brno. Today we are going to talk about registry-native delivery of software content. I will focus on specific use cases, talk about their benefits and what to avoid, and at the end I will show some demos. So let's dive into that. Use cases. Use case number one: content, regardless of how it's packaged. Container registries are becoming an important source of software distribution. You might ask: why would I package content in a container image? Because a container image includes an assorted collection of software, often hundreds of software components. This format facilitates use of the software, since a complete set of the needed components is delivered as a single unit. Whether it's an RPM, a Python package, an Ansible collection or an arbitrary file, you can make all of this software content available in a container image. Later in this talk I will show you how to manage distribution of your own private content that you don't want to expose on the Internet outside of your control. Use case number two: containerize your application, or build execution environments. Containers are increasingly popular in development environments. Nowadays developers can build lightweight and portable software containers that simplify application development. They let us specify the environment in which we want our application to live within a simple container file. This effectively enables easier collaboration and saves us from the "it works on my machine" problem that is so prevalent in development teams across the globe. Having said that, containerization simplifies development and testing in a production-like environment. And speaking of testing, execution environments come in handy here. When building an execution environment, the environment variables, dependencies, configuration and other things need to be defined within the container build; later on you can execute it and have an isolated, reproducible environment. So, I have briefly talked about the use cases and benefits of registry-native delivery of software. What is needed to actually build a container image containing the content you want to include? Usually it all starts with defining a container file. When defining a container file, you always need to carefully define the dependencies, so you have them at hand. In some cases you need to provide configuration and requirements files and make them available in the build context. In most cases you build your image from a parent one, so you need to specify and have access to the base container image. In order to simplify the image build, it's good to avoid certain patterns, so let's go over some of them. You definitely don't want to rely on external infrastructure; instead it's better to stick to some good practices. Imagine you're building a container image: what do people usually struggle with a lot? Dependencies. For example, the environment you're working in is pinned to a specific version, or your software has external dependencies. You never know with 100% confidence that tomorrow this version of this specific package will still be available on the Internet from the source where you expect to find it. A good practice is instead to take control of the software package versions you rely on and own your own dependency pipeline, ensuring no version disappears into the wild.
Another good practice is to cache content locally and save on download time. Maybe you have low bandwidth, or maybe your organization has limited access to the Internet; you should take these aspects into account whenever you build container images. In addition, some software can be really large: container images, for example, can weigh gigabytes. You want to have these big images cached locally and not download those bits at the build time of your own image. Curate and organize content in the way that is best adapted to your workflow and pipeline. Since a container image is an assorted collection of software, it can contain RPM packages, Python packages, arbitrary files and many more artifacts. You want one single centralized tool that is capable of storing all these types of software packages in an organized manner. Otherwise it's a pain, and you need to figure out how to deal with each of those types separately. For example, for RPMs you would need to set up at minimum a web server that will serve the content; for container images you would need to configure your own private registry. But what if I told you that you can do all of this with just a single tool? And this tool is called Pulp. I won't focus in depth on what Pulp is, its architecture and workflows, but I will give you a short introduction so you have a general picture of what we're talking about. Pulp is an open source project, available on GitHub and written in Python. The latest development version is Pulp 3. Pulp is a plugin-based system, meaning that each available plugin can manage the content type you need. So far we have support for RPM, Python, Ansible and container software artifacts; the full list can be found on our docs website. So what can you do with Pulp? With Pulp you can fetch software packages and create a local mirror of a remote source. For example, you can mirror even the whole of PyPI, given you have enough disk space (I think today's PyPI size is around 7 terabytes), or you can mirror a selected set of content if you know exactly which versions and which Python packages you will need later. By mirroring content locally, you can be sure you have every version you need at hand and control the dependencies of your software. With Pulp you can upload and distribute your own software, which you don't want to expose publicly. For example, you have built your own private Python package and you don't want to publish it on PyPI. What you can do instead is upload it to Pulp and distribute your own Python content by hosting your own private PyPI. Pulp can organize software packages in repositories; it can help with content promotion as well as distributing a controlled set of content to different environments. You may also want an immutable set of software packages to create a reproducible environment, and there are plenty of use cases where you may want reproducibility. After content curation and repository management, you can host those repositories and distribute the content. One of the features Pulp provides that can help with the image build is called the OCI image builder feature. What happens behind the scenes is that Pulp takes the provided container file, as well as any other files needed in the build context, builds the container image, and subsequently uploads it to your private container registry. So now let's connect the pieces and show some demos.
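As a small illustration of the consuming side of such a mirror: once Pulp hosts your private PyPI, clients only need pip pointed at your own index. The URL below is a placeholder whose exact shape depends on how you distribute the repository.

    # Resolve packages from the locally hosted index instead of the public PyPI.
    pip install --index-url http://pulp.example.com/pypi/internal/simple/ my-private-package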
I will show you how to continue your application, build image from a container file, upload to Pulp and host it from Pulp container registry. In my case, I have decided to continue the Python application. It's a really small Python app that uses Flask, the HTTP server for Python application. Now let's have an actual look at the container file. This is a pretty small and straightforward container file, however, I'll try to go line by line and explain what is happening here so we have a full understanding. So as you can see on the first line, I have specified the base image from which my image is going to be built. Note that this image is served from my own private container registry. This example is using Python 3.7, which is installed on Alpine Linux. It's a small minimalistic Linux distribution, which keeps the images small. That's the reason I actually picked this one to keep the image small for the sake of demo. Because I have mentioned previously that Flask is a dependency, it is specified as such in the requirements.txt file. Here is the file and you can see that Flask is there. So I'm making the requirements file available at the build context file. Later on, I'm installing all the dependencies, which are specified in the requirements.txt file. However, please pay attention that I'm serving those dependencies from my local private PyPI. The next step is exposed port 5000. I have specified the entry point and as the last step, I'm running my application. So what I have done ahead in preparation for the image build, I have created a local mirror of the container images I plan to use as well as the Python application dependencies. Let me show you. So here is a container repository, which has a name demo. And that repository contains the Alpine 37 image, which I have actually mirrored from Docker Hub. So this Alpine image is available locally. And with the Python package, dependencies is similar. Here I have my Python repository, which I have named my application dependencies. So this repository contains all my application dependencies, including the flask. So what's next? I have created a simple script, which calls into the PULP REST API endpoints. I don't plan to focus on each call and endpoints simply not to overload with information. However, while I'm going to go for the steps, sometimes I will stop and explain more in depth a specific API call we will have interest in. I would like to mention that we are actively working on the CLI, meaning that all these workflows will be facilitated eventually with CLI. Okay. Let's run the script. So as a first step, I need to upload requirements to the text file to PULP. Why we need to upload it? We need to upload it because we need to make this file available at the build context. So by uploading it to PULP, it will be stored as an artifact in the PULP storage artifact. Similar step will be with regards to the my application files. And since my app is very simple and small, it has just index.py file. The procedure is similar. I will upload the file and it will be stored as an artifact in PULP storage artifact. As the third step, we are going to create a container repository in PULP. Eventually this repo will contain the image we're going to build and upload to PULP. I have named it my containerized application. Here starts all the magic. At this step, we're actually building the image and uploading it to container repository. So here I'm going to explain what happens behind the scene. Here's the API call. 
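Before looking at that API call in detail, here is roughly what the container file walked through above amounts to; the registry and index hostnames are placeholders for your own Pulp-hosted container registry and private PyPI.

    cat > Containerfile <<'EOF'
    # Base image mirrored from Docker Hub and served by the local Pulp container registry.
    FROM pulp.example.com/demo/python:3.7-alpine

    # Dependencies are listed in requirements.txt and resolved from the local private PyPI.
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt \
        --index-url http://pulp.example.com/pypi/my-app-deps/simple/

    COPY index.py /app/index.py
    EXPOSE 5000
    ENTRYPOINT ["python"]
    CMD ["/app/index.py"]
    EOF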
As you can see, we're providing that container file to the API call, and this endpoint accepts a JSON string that maps artifacts in Pulp to file names, so any artifacts passed in are available in the build context. That's why, a couple of steps earlier, we uploaded the requirements.txt file and the index.py file. Once we hit this endpoint, a task is triggered and, in a subprocess, buildah actually builds the image for us; then, all behind the scenes, it takes the image and uploads it to Pulp. Let's try to inspect the repository. It will take some time, because it does take time to build images; that's why I picked a small image, the Alpine distribution. So we have a repository. We know that every image is composed of a manifest and blobs, and that images are usually referenced by a tag. Our image contains seven blobs, one manifest and one tag; since we haven't specified any particular tag, it's tagged by default as latest. So at this point we have the repository in Pulp, however this repo is not yet ready for distribution: no one can see it. This is actually the moment where you can decide what to expose for consumption and when to expose it, and the step responsible for exposing the content is distributing the repository. After you distribute it, it will be available for consumption at this location, the my-containerized-app path, served from your local private registry. What I would like to do next is spin up this container, like this, on port 5000, and as you can see, Flask is serving our application. Another thing I would like to show you is that the dependencies have actually been installed inside that image. So I'm going to run this container in detached mode, execute a pip list command inside the container, and grep for the presence of Flask. As you can see, Flask is there. That's pretty much it for this demo. In this demo we have learned how to containerize your application; the example containerized a Python application, but you can do it with any other kind, whether it's a Go application or something else. We have also learned how to build our image and how to expose it for consumption. Now let's switch back to the slides. Another demo that I would like to show you is how to ship content regardless of how it's packaged, or how to build execution environments; however, I realized that the container file would look pretty much the same as the one we just went through. The only difference would be one step, and this step consists of uploading your own content to Pulp first. So, for example, if you want to ship your own private content inside a container, first you need to build it and upload it to Pulp, so Pulp can serve it while building the container image. The same goes for building an execution environment: if you plan to build on some base parent image, you would first upload it to Pulp, and whenever you build your execution environment, it will take that image as its base image. So the steps are more or less the same. So, guys, that's pretty much it for me. Thank you very much for your attention; I hope you learned some pretty cool things from this talk. Please don't hesitate to reach out to us on Freenode: we are available for you on the #pulp and #pulp-dev channels. You can also reach out to us via the mailing list. Please also go and check out our website.
It contains a lot of useful information which will help you understand how pop can help you or if there is any use for you in there. And in case you would like to contribute, feel free to check out our open source code which is available on GitHub. We will be happy to receive contributions. Thank you. Hello, everyone. We both had Wi-Fi issues yesterday. So just in case we suddenly disappear, we will be looking at the main chat. We have a new chat with Melanie which Melanie already linked in the chat. Melanie, if you could do that again. So you can join us there and ask some questions there. So, let's see if there is anything there. Hey, am I audible? And we already have a question. Can the image be automatically built when the parent image is updated? It depends how you have tagged that image. If the image has been tagged as latest, the parent image, and whenever you build another image on top of it, so you will just pull on that latest tag. You will still need to perform those steps I have shown in the demo. There is one more question. How complex is to write the content plugin? We have a plugin writer's guide which facilitates the contribution, so in theory it shouldn't be difficult to write the plugin. I can paste the documentation links in the chat. Thank you. For anyone with us live, we already have in the chat the link to the stand again and also the plugin writer's guide. We've got another question. Do content plugins have to be written in Python? Yes, because Pulp is a Python project. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thanks for watching.
Container registries are becoming an important source of software distribution. Why package content in a container image? A container image includes an assorted collection of software - often hundreds of software components. This format facilitates use of the software, because a complete set of the needed components are delivered as a single unit. In this talk we look into how to ship content regardless of how it is packaged (rpm, python, ansible roles) in a container image and build the image with just one single tool - Pulp3. With Pulp3 you will be able to take advantage of software distribution using the container first strategy : - Containerize your application: build and run application in a container; - Build execution environment images - which provide security features such as isolated execution and integrity of applications; - Cache the content - allow container images to build without relying on external infrastructure by caching(or permanently storing) the application and dependencies.
10.5446/53637 (DOI)
Hello everybody, this is Abdelnour. Today I have a story for you folks. Guess what: it's about release management with GitOps, about converting the manual operations of a release management team into software operators. The story started two years ago, when I joined a release management team that served more than 700 members across the organization's technology delivery. With that, my career shifted from pure software development to release management and release engineering, and I kept asking myself how release management should be done at that scale. Today we will see how I transformed those release management operations with GitOps. Here is the agenda: first, what GitOps is and why GitOps; then what release management is and what the team's challenges are; then the solution we proposed for scaling release management; then GitOps in action, or let's say the solution implementation; and finally we will conclude with what is next in GitOps. So let's start right away with what GitOps is and why. Basically, "XYZ-ops" means "operate with XYZ". This is not a common formula, but it works for GitOps: GitOps means operate with Git, no simpler than that. It means you will change your way of operating things: even if you have a good level of automation, you need GitOps to organize this automation. GitOps is about shipping your operations into software. How to do that is what we will explain today with the release management use case. The idea is that if your operations are codified and become software, you can apply all software practices to that code, including reusability, clean code, testing and so on. And this is why GitOps: you benefit from version control, because your operations are converted to code and the code can be versioned using Git, and you also get an audit history and integrity. You also leverage peer review, because it is code: you use a branching model, you merge branches, and you ask someone for review, so you get review from your team and mitigate the risk of change. You can also revert and roll back, because everything is in the history. You can move with more confidence, because it is code, so you can write your unit tests and structure tests, maybe for container images or for VM images. Last but not least, you preserve the intellectual property of your enterprise: if someone leaves, the operational knowledge is preserved as code. Now let's move on to the goal of GitOps, before talking about the release management team and its challenges. What is the exact goal of GitOps? We all know the common story between dev and ops, where the development team wants to release new features frequently while the ops team tells them to slow down because systems need to be stable. GitOps comes to solve a big problem in operations, which is
keeping systems stable yes but also having the ability to change frequently so that's why the ops team will be promoted from ops to get ops and here there is a culture change and there is here a mindset change where the operation need to think like developer and need to treat and to convert the operation to code and need to go with the same flow of software development lifecycle or sdlc so that's why with that once the ops team become get ops team that time the goal of get ops it will be happen and will be effective which is the reliability of the system so reliability doesn't mean that you will not change but you can frequently change while relying on something that cannot lie which is code all right so move on now to what is release management in other hand and after that we will merge together and we will see how it works so release management in general is the you know this is release management is the process of managing planning the release and the release is communicate code change to production yeah so i have here code built by developer so basically you have here the build generated by software development team then the release management is about building the software compiling the code then going until production through some quality gates and others and here you may have some scheduling some planning because for example some dependency need to be checked okay so you cannot go to production maybe you are releasing a new feature and this feature need training for your customer so you stop here and you you need also to schedule for this release all right so this is basically what is release management and now move on what is the challenge of our release management team so the first challenge is that release management team doesn't do only release management which is process and planning but also it is about release engineering so we are also involved in the cicd pipelines the technology of this release management so basically this is release engineering for release management so this challenge already makes the scope of the team mission wider another dimension of challenge that we have also the build environment size imagine that the build environment includes more than 700 build plan more than 20 technology to build the environment like for example any package manager as you know maybe go maybe maybe maybe others and this technology has different version up to 90 version and this is basically the size of our build environment now another dimension of the challenge guess what it's not only about build environment installation configuration yes this is the responsibility of the release management team but also in the release management team we need and we are responsible to maintain the underlying on prim infrastructure not from hardware perspective even not from virtualization perspective but from maintaining the os libraries clusters and installing the cluster maintaining the cluster installing application out of the cluster this is the second layer and this is how it works so basically we maintain the whole stack then to make the build environment more effective the environment need to be integrated with other tools part of the delivery chain which is vs tool or what we call also value stream tool chain so these tools like for example pass apm management alm tools itsn tools so the build environment and the pipeline environment need to be integrated with it with these tools to be more effective now that we have this ready we had to roll out roll out and make this icd tool 
available to projects to the business to more than 100 projects right and in order to scale we enable teams the delivery team to do this pipeline by themselves they just need to use our pipeline system as a service but we cannot scale like that so we also think about providing the team reusable assets that's why we provided a pipeline library and this is also under our responsibility so the pipeline library gives a lot of features for the team like for example some ready function building blocks to call this stage build the stage deploy stage maybe also stress test just one function with some parameter that the development team or the delivery team can use it right away but you know what this is not only the mission we have also we are providing also some reusable assets that called by the pipeline so this is a set is very critical so the pipeline written by the team will be code and this code need to be maintained again and this code can be a huge code because you know the automation maybe need a lot of scripting so we are also here providing and suggestion before that some tools some technology to automate things in the clean way so for example we use for example helmi chart maybe we use for example ansible roll so we are also responsible to providing this reusable assets that called by the pipeline and mitigate the code of the pipeline and make the code more إليقant and clean so imagine this is the scope and this is a challenge faced by the release management team so if you think like where we classify this team for sure basically and initially this team should be classified as ops team so we are operating all these things and you know to overcome and to overcome all these challenges and to scale quickly we realized that the manual operation will not work or even the non-organized automation will not help us to scale then organize automation like one of guy has some script the other wire has power shell script the other guy has batch script you know so this also will not scale so alternatively we thought about adopting gtops for release management operation and the solution now is not only adopting gtops because gtops is very clear so yeah you put your code in git and you will leverage all life cycle of the software you have request merce i cd then you deploy it okay however now our solution is about what is about what is the prerequisites to codify each layer of our work this is the challenge this is and this is the solution come from there and the idea is now we are asking what is the kind of operation is it for example cluster maintenance it is for example built environment readness it is pipeline library so based and depends on the operation the kind of operation we ask some question can be codified first of all is it ready to be code and to answer this question we analyze each kind of operation by by checking this criteria is there a developed framework for that is there a test framework for that is that is there a package manager for that and once we found this and one we found generally that the 12 factor can be applied that time all right so that time we can go immediately and we are ready to put it you know as code this is the idea of our solution and how gtops work with release management now i will show you how this work with all our layers so this is what we saw together in previous slide that we are operating and we are maintaining all these layers and this is our maybe daily work all right so now this is gtops in action this is how it works so we for example for the 
cluster installation to codify it and to work with gtops we have playbooks in gt repository and this playbook will present the operation of the cluster then for example the other we have also helm releases another repository for helm releases we have many repository helmet chart repository for each chart and this repository are used for to install the application on top of the cluster to make the build environment up and running and also to maybe to integrate with some tools all right and we have also another repository for the build environment because you know the build environment need to build a software and building the software require a builder image in the containerized environment so we need to create images and we can create image by simple docker file but this docker file will be missed out we will have we will end up with mess of docker file so here also we have software which deliver images based on a code on docker file and other things okay and we have also for the integration specifically we have some also software that integrate for example pipeline service with artifact service or maybe integrate the pipeline service with the secret service so on and so forth and for each software is delivered as code and we have also pipeline library that is here to present the pipeline library and templates but it's also used to integrate with the tool so it has some function that integrate with the tool like call api and others and also it impact the build environment so you can for example if someone from this team use this library he can pass the parameter that he needs for example more memory and cpu for this for his agent by just simple code you know and this library should here create and the build environment and run it in a dynamic way so the pipeline library presented that layer but it impact many other layers and we have for example for this reusable assets we are providing some helmet charts and cpu role and maybe terra for modules so this is the idea this is the kind of the chart so this is this is helmet charts for the cluster maybe for the application but this helmet chart will be used thereby the pipeline so helmet chart for application x helmet chart for environment y and finally we have the pipeline as code which is the interface between us and the teams so the team that doesn't need to deal with the tool they just need to deal with the service through the interface basically this is the top in action now i will show you also two example how it works in the end-to-end workflow so let's move on to this example so let's say we have as release management team they have a new task which is optimize builds of technology x optimize builds mean that this build takes a lot of time maybe we need some cache cache the dependency you know cache maybe some some application dependency so the subsequent build will finish quickly so maybe after analyzing this task this task maybe require many things maybe require to create a persistent volume claim which which is the volume where we will cache so that time we will have what we will have to maintain our helmet chart maybe we need another helmet chart for persistent volume claim or we need to update the helmet chart to support a new cache all right we also need to apply this helmet chart using the helmet is repository all right so before that before even this so the helmet chart will go through the pipeline the build start maybe say there is some linking some test quality data passed deployed and the helmet chart is available on the 
artifactory or the artifact system now the helmet release the helmet release here we'll use the helmet chart here and we'll update the helmet release with a new volume that will be created so it will use the last version of the helmet chart and the build start you know and start the until what and it deployed to the cluster and the persistent volume claim or the volume is ready now in the cluster so we can move on and apply to the pipeline library so the pipeline library support a function to like feature toggle i want this feature or not so for example maven cache true which means that the pipeline library will take this boolean and will create the cache or will attach the cache or the volume to the agent of the build and the pipeline library also has its own pipeline and we have by the way here tests unit tests for this library and the library will be deployed and will be available again in get as tag you know and once it is available the team here can use it for their pipeline so this is the pipeline services is the application basically this is the business all right this is the business so all this work here happen will be used immediately here the pipeline library already inherited by the team used by the team imported by the team so once we change it here once we release it here it will be used immediately this is one example of operation with git or gitops for release management another example and we finish by this example which is if we have incident and let's say we have post mortem where we have the incident retrospective and we found the root cause of the issue by the way this happened also and after that we found that the application we have one of the release management application is crashing is restarting frequently so we need to fix it okay so now with gitops you can see that the incident fall or the incident remediation fall into a code so we will have issue created in the issue management system again so we will have issue created in the issue management system again so we create the issue task or story or something like this so here maybe hot fix okay so this is hot fix all right and now we have to fix the frequent crash of the service x so after some analyze we realize that application is crashing because other application throttle this application because we didn't specify a limit for memory and cpu so we need as fix we need to review all application running on this cluster all right and that time we need to specify here in the helm releases we will need to specify the cpu and memory for each and the limit for each application and we will deploy all application will be updated and the application will be our up and running on top of the cluster with the new configuration which is limits for memory at cpu and that time the application or the fix is released and we fix this crash so this is how it works this is the map of the delivery not only the business have the pipeline but also the release management team has his own pipeline because he was able to codify his thing and to bring his operation into code finally what is next i mean what is next after the technical benefit of githops what is next is the business value of githops so as business value there is here business continuity with githops why because without githops it is hard to catch the operational knowledge of your team and you have to manage the risk here because human being can be sick can leave the team can leave the company however with githops the operation of knowledge is shipped as software through 
and to and automated pipeline the team is building software and the software is delivered automatically resulting on operating things quickly and automatically the other also advantage is killing mindset of credits why without githops each team member wants to show off wants to approve that what's done he does it just to get the credits with githops each team member is developer where he push the code then the verge control system will take care about who did what and when فتاة و with very tough story history also another advantage is breaking the silos how without githops the opportunity of collaboration among team with different function is very low but with githops many team can talk with each other through requests for example my repository my githopository has inventory list and i will update the inventory as software developer however the responsibility of reviewing this inventory is with the operation team so and because i am using for example code ownership so automatically if i change this file a new reviewer will be added to the request from infra team or from operation team and he will review my work so here all teams join the same dashboard which is the request and can review each other and can make the delivery go fast by this shift left review and this is how we break the silos using githoproquest and finally githops will make the change more reliable why without githops the team executes the change directly on the systems so the potential of failure the potential of risk is high and that's why we without githops we have the individual responsibility and we take care about makes things stable and to not touch anything with githops the code which presents the operation on load knowledge pass through many quality gates until being deployed and this is how it works so guys we reach the end feel free to drop me in question you have now or to contact me
Transform Release Management role from System administration to software development for Release operations thru GitOps practices Shipping operational knowledge into a software is a big milestone towards better configuration management. In this talk, I will explain how we introduced configuration management practices into release management team while leveraging the software engineering principles during this journey. I will start by clarifying what problem(s) were we trying to solve, namely: Lack of system Reliability, huge dependencies among silos, Rework between Dev, QA and release teams. Then, I will explain the solution that I proposed to solve these problems, namely: Service Offering Model, Killing environments gap and treating everything as CI (Configuration item). After that , I will move forward on challenges that I faced while trying to solve the issue: migration from Legacy system, security compliance & disconnected environment, .. and others. Finally, I will give overview about the solution implementation.
10.5446/56900 (DOI)
you Hello everyone and welcome to the declarative and minimalistic computing dev room at FOSDEM 2022. Like last year, FOSDEM 2022 is an online event. The last 3 years have been really hard for everyone and as we continue to learn living with Covid, he is hoping we will be able to meet you all again soon. This talk was prepared by Oliver, Piotr and yours truly. Professor John McCarthy once said, Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their programs as a servant whose master the user should be able to control it. If designers and programmers think about the apparent mental qualities that their programs will have, they will create programs that are easier and pleasanter, more humane to deal with. With his work, Professor John McCarthy pioneered artificial intelligence as we know it today and gave the world the gift of the list programming language and all the list dialects that were inspired from it. In a sense, he kickstarted the modern computing era. But his words were forgotten. Today's programs try to control how the user interacts with them and the tools we, as programmers, use to develop are becoming more and more strict on what we can do and how we can do it. Let's discuss a very common scenario. We will assume you want to develop an Android app. The official way of developing an app is through Android Studio. You are writing your app, you compile it, you run it, hoping it will do everything in the expected way. And of course, it will fail. It will fail because something happened that was not supposed to happen. You will then have to debug the program into correctness by following the state of every variable to figure out what is causing the issue you are having. Is this what we really want? But if I could tell you, there's another way. You can write code that will state what the desired outcome is. It will get an input, it will return an output, and it will not touch the state of anything else. And all that while being able to output these changes live, allow the developer to directly see what they are working on. This allows for much faster prototyping in early stages, while still being able to reason exactly what the code does. And of course, later, deploy to production without having to rewrite everything, as is the case with another language named after a snake. This is why Lisp is the second oldest language in existence, because of the power it gives to the programmer to do what needs to be done, without worrying about how it will happen. Lisp frees the programmer to think of solutions, not problems. In companies, it is said that one should worry about the competitor only when they hear they are using Lisp. What is Geeks Europe? Well, Geeks Europe is the legal, non-for-profit association with the aim of providing the legal home for the Geeks project. Geeks Europe, just as stated, is a legal, non-for-profit base in France, with most members consisting of core contributors, or at least persons who are strongly interested and passionate about the potential that Geeks has as a software platform to resolve problems or issues in the computer and software domain. So please consider joining the association to protect and help advance the state of Geeks and declarative and minimalistic computing languages for that matter as well. Now, any questions? 
We hope that you will enjoy all the other really great talks from our speakers, and please make sure to watch the talk from William Burd called the Rational Exploration of Markarthnes. Now we will be available for any questions. Thank you for watching this talk. We hope that you will enjoy all the other really great talks from our speakers. Please make sure to watch the talk from William Burd called a Rational Exploration of Markarthnes. Now we will be available for any questions. Okay. Thank you everyone for coming. I hope you will enjoy our lineup of the really awesome talks that we have. Ideally, I wanted Oliver to be online right now, so he could continue with his talk. But unfortunately for technical reasons, he's not here. Oliver, if you can hear this, please join. Yeah. So is there anything you... Okay, so I'm a bit... Okay, so a question from Piotr. What was the hardest thing about organizing this day? So it's... First of all, you have to make sure that you define the purpose of this dev room. It is important to make clear what can be included and what is the purpose of this dev room. In our case, we want to support any language that promotes what we discuss in this talk. So... Yeah. Yeah. And like we need to... Then when we have decided what we want to consider, we need to make sure that we attract the best speakers we can get. And as you will see in the rest of the day, we did. We have really good speakers. And yeah, by the way, it would be really nice if you could vote the questions so I could see them on the right. Because the way the UI works, I have to go back and forth. Yeah. Anything else? By the way, it's also a good opportunity to mention Geeks Europe because Oliver would want me to do that. Which is... Give me a minute. Christine, thank you. It's really nice to hear that. Like we are trying every year to improve and grow as a room. And to attract even better talks. And also, something really important for our dev room is to be as open as possible to everybody. We try to... openness is really important for us. And shortly we will start to continue with the next talk from JJ. So about McCarthy, I think that the video makes a much better work of giving him a really short summary of what he did. McCarthy is one of the people that we were lucky as a human species for his work. And we wouldn't be here with all... We wouldn't have all these languages if he didn't start with all of this. Yes, please make sure to watch William's talk on... William's talk later today. William is one of the best speakers we have. We do have a lot of... all of our speakers are great, but William is really good. Well, Python, I try to avoid saying this name here. Also, to avoid anybody thinking that I may have taken a dig at Python. I cannot confirm anything. From experience, like from all the projects that I have worked on, every time we started with Python, we ended up rewriting everything, something else. In our case, it was Clozure. Yeah, so later I will share a really nice paper on this... on this room about writing in a language that you can prototype and then use in production very literally, right? And Piotr, thank you for making me a bit less nervous. Right now I still... Okay, it should be less than a minute now. So if you want to ask anything else, or I think that's... It's like the prototype to production, it's really important because at the beginning you need to be able to prototype fast. 
And then, like I have seen many times in many companies that things that were supposed to be prototypes, they could be production for much... Like they might get to production without any rights and stay there and be really slow. And you're like, okay, what happened? Why is it so slow? And this is how we end up with really city services. Then somebody has to rewrite everything from scratch. That's what we solve with this languages. Okay, people, thank you.
Welcome to the Declarative and Minimalistic Computing Devroom. In this year's virtual conference we will honour the late Professor John McCarthy as the founder of AI and the inventor of LISP. McCarthy with his work pioneered artificial intelligence, developed the Lisp programming language family and kickstarted our modern computing world. Lisp is one of the two oldest computer languages in use today.
10.5446/53556 (DOI)
Hello, I'm Pierre Neidhardt, and I'm going to showcase my development environment, which is based on a rather unconventional shell. There are many features to demo, so without further ado, let's get started! First things first: what's wrong with the current state of affairs? Let me replay a shell session that I went through the other day. I wanted to collect the checksums of all the files with the name Nix. First, I switched to the right directory. I find that having to type the cd command is tedious, even with completion. Now I list the desired files recursively. OK, let's collect the checksums. Whoops, first pitfall: I forgot to surround a variable name with double quotes. But let's grep the first result this time. Uh oh. Wait, this wasn't there before. What's going on? Ah, I have a file with a newline in its name, and it seems that the textual representation of the newline differs between ls, find and grep; grep renders the newline as an "n". Now what if we use another tool, like find? This returns almost what I want, if it were not for sha1sum displaying a spurious backslash for the file with the newline. Finally, if I want to copy this result, I am forced to use the mouse for lack of better keyboard controls. Let's review the list of issues I've been trying to address. First, text manipulation. Maybe the most basic feature is to be able to search the whole shell buffer, just like we would in any other text editor. We should be able to copy and paste using keyboard shortcuts, not just the mouse. Maybe a bit fancier: I'd like to be able to navigate the prompts; after all, isn't that the most natural kind of sectioning for a shell? Then performance. First, Bash, for instance, can be very slow, but maybe most importantly, the idiomatic way to filter and process outputs is to spawn a system process for each Unix tool, such as grep, sort, etc. And last, the typical Unix shell workflow becomes problematic when you want to reprocess an output: what you would do is rerun the command that produced the output in a different pipeline, which can be a showstopper if the initial command is very slow. Referring to the output using back references is one solution to this problem. Debugging: wouldn't it be nice to have state-of-the-art debugging tools, like interactive stack traces and introspectable objects? Last but not least, handling structured data: lists, hash tables, objects, you name it, in particular in pipelines. In Bash, a file list is typically passed around as a string buffer; ideally, we should be able to pass lists of file objects. Let's get down to the demo. This is SLY, my Common Lisp REPL. It displays a typical multiline prompt with the current working directory at the top, and to the left there is an index, here 0, which is the back-reference index; I will come back to it in a minute. Let's start with a simple example, say the calculation of a big power. Ta-da! That was fast. Evidently, Common Lisp can handle large arbitrary-precision numbers out of the box, so it's pretty handy as a pocket calculator, even for more advanced math. Conversely, Bash is pretty bad even for simple calculations. Let's move on to a more typical task: compute the SHA-1 sum of a file. Let's take this .png file here; notice that I typed it with completion. OK, now what if I want to compare it with the value that was recorded in the .sum file? Here. I could compare them visually, but that would be a bit tedious.
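As an aside, the conventional plain-shell workaround for the newline-in-filename trap from that replayed session is to switch every step to NUL-separated output, for example:

    # NUL separators survive any byte a filename may legally contain, including newlines.
    find . -type f -name 'Nix' -print0 | xargs -0 sha1sum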
Instead I can leverage the language together with a very important concept I'm going to present here: back references. On pressing #v, all the previous results get prefixed with an index which I can complete upon. On pressing zero, the non-zero prefixes disappear and my #v0 turns orange, thus validating the match. Let's repeat it with the second result. This returns T for true, meaning the strings are identical and the checksum is valid. Previously, I mentioned the issue of search and copy-paste. Since my REPL is actually in Emacs, I can use all the features of the text editor, including search. Say I want to look up the term "term": all occurrences get highlighted as I type. Then I can cycle through the results. When the cursor is on a previous prompt and I press enter, it pastes the whole command to the current prompt, ready for editing. Of course, I can also move around with the keyboard, select an arbitrary portion of text and copy it. Next, I'd like to introduce the selector library I've worked on. When working on multiple projects, each of them using multiple shells, it's only too easy to get overwhelmed with the amount of open shells and keep stumbling around to find the desired shell. To the right is the list of open shells; only one is open at the moment. Let's open a new shell and call it "project". The new shell popped up at the bottom. This shell shares the same underlying Lisp process as the one at the top. So if I were to define a global variable, checksum, holding the value of the previously computed SHA-1, it will be understood in both shells. What if we want to open a new shell using a new, unrelated Lisp process? Here I have a list of predefined Lisp processes. Some contain preloaded libraries for working on the Nyxt project. Some use a different Lisp compiler altogether. I can even run the Lisp process in a container with limited file system access. Let's pick the first one. In this new shell, if I enter checksum, the shell will expectedly complain that the symbol is undefined. Shell management goes beyond just that. I can rename the shell, say, to "my project". If I review the list of shells, I see that I have three of them. I can fuzzy search, narrow down, or even select multiple shells. If I press enter on a selection, it displays all the selected shells on screen. Let's showcase the prompt navigation feature. I can navigate the prompts up and down like I did before. Not only that, but I can fuzzy search all the prompts in all open shells. So if I were to search the term "checksum", the view gets updated to the shell containing a matching prompt. Pressing enter confirms the selection and brings me to the desired prompt. Another itch that I've been scratching for too long: directory switching. Instead of typing cd, I want to use my regular file manager. Here it's Emacs' Helm, but it could be your GNOME or KDE file manager. I can fuzzy match against directories. Then pressing a keyboard shortcut brings me to the selected directory in my shell. It's just as easy to go anywhere, say to the root directory, and from there I can invoke the history of selected or visited directories, which is also fuzzy searchable. And in a matter of just a few keystrokes, I'm back to where I started. Similarly, the history of commands is also fuzzy searchable. Say I were to search the previously computed checksum command: notice how it narrows down live, and how it successfully matches regardless of the order of the terms.
While this shell is primarily Lisp, it also understands string interpolation, just like in Bash. Say I were to get the PID of the Lisp process. Now I could query the command that started it by inspecting the /proc entry corresponding to it. So I can dollar-escape the PID variable, just like I would do in Bash. And here I get the full command. What's interesting about this is that this is not standard Common Lisp. Here the syntax of the language has been extended to understand these special strings. In the same way, the syntax can be extended at will to make command writing as convenient as possible. Say you don't want to write any parentheses to execute a simple command. No problem. I can write any valid Bash command directly by prefixing it with a special shebang. Again, this is not standard Common Lisp. This is an extension. Let's consider the more complex case of a pipeline. I want to return the three most frequent terms in a dictionary of words containing duplicates. Here is the dictionary. Now I filter it with the uniq command, sort it, and finally get the three most frequent words. Now what if we wanted a more Lisp-based syntax? Would it be too verbose? I believe it does not have to be. With the right commands and keyboard shortcuts, it can be just as fast to type. I complete here and insert the pipeline operator, which is this colon-dash. Here I'm using a keyboard shortcut to automatically insert the double quotes and the colon-dash character, which represents a pipe. This allows me to type the whole command very fast, with about the same amount of key presses compared to the pure Bash version. It gets more interesting when I start intermingling Common Lisp with the rest of the pipeline. For instance, let's change the number of results we want by fetching the value from an Emacs buffer. Let's call it ZZZ. I insert two here, come back to the shell, and I have the two most frequent words. Now this is where it gets a bit mind-boggling. Everything starts mixing up together: the shell, Common Lisp, and the text editor. All that said, we're still mimicking the old pipeline behavior. These pipelines are in effect black boxes. You can't inspect what's wrong if something breaks in the middle. So what if we took a different approach? Let's ask my package manager to build, say, Emacs. See here, the command returns two values. The first one is the output stream, and the second one is the process itself, which I can inspect in this new window to monitor the status. Here I see that the process is currently running, but if I refresh this window, now it has exited. So now that the process is over, I can indeed inspect the output stream, which gives me another output stream which can be further passed to another asynchronous command, like a pipe, and the content of the stream itself, which is the path produced by the Guix package manager. This is the first building block of a paradigm in which we process pipeline data in an iterative fashion. Let's consider a more complex real-life example. I have a messy music directory with random files scattered across subdirectories. I'd like to clean it up, say, remove low-quality artwork and music files, and maybe remove the music from a specific artist, but only where the quality is too low. The finder command returns a list of all the file objects recursively in all subdirectories. Finder is extensible, so I wrote a specialized version called media-finder, which returns media file objects instead. These objects have extra media information.
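Going back to the word-frequency pipeline from earlier in this demo, the point about structured data is easy to illustrate outside the speaker's Common Lisp and SLY setup. The following is only a rough sketch in Guile Scheme, not the actual code from the talk: it keeps the whole computation in-process, counting words in a hash table and sorting the resulting pairs, instead of shelling out to sort, uniq and head.

(use-modules (srfi srfi-1))   ; for 'take'

;; Return the N most frequent words in the list WORDS as (word . count) pairs.
(define (top-words words n)
  (let ((counts (make-hash-table)))
    (for-each (lambda (w)
                (hash-set! counts w (1+ (hash-ref counts w 0))))
              words)
    (let ((sorted (sort (hash-map->list cons counts)
                        (lambda (a b) (> (cdr a) (cdr b))))))
      (take sorted (min n (length sorted))))))

;; Example usage with an in-memory "dictionary" of words:
;; (top-words '("foo" "bar" "foo" "baz" "foo" "bar") 2)
;; => (("foo" . 3) ("bar" . 2))

The result is a list of pairs rather than a text buffer, so it can be filtered or inspected further without re-running anything.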
Now what's interesting is that I can pass additional predicates to further filter the list with my requirements. Say I want to match files whose name matches "cover" and which have the JPEG extension. Okay, so I can now inspect the result. Say I choose this file. This is the inspector. It shows me all kinds of Unix metadata, but also the MIME type, and I can inspect the media stream, which gives me valuable information such as the resolution here. By using regular Common Lisp, I can filter out the good-quality artworks, that is to say those with a height above, say, 700, only keeping the bad-quality finds. Now I can refer to the previous list and pass it the height filter. Okay, so I have a narrowed-down list now. Previously, I've used filters like match-name and match-extension, but unlike the find Unix tool, where the search rules are hard-coded, here I can extend them and define my own rules to match files with a given bitrate or, say, songs by a specific artist. Let's define a new bitrate-inferior function, which will return a predicate. Now let's compare the bitrate of the file parameter. We get this bitrate from the media-format slot and compare it with the argument. Okay. Now let's do the same with match-artist. Again, we return a predicate, so a lambda. We compare the string, which we get from the tags stored in the media format. Oops. And we compare it with the argument again. Okay. So now what if I want to narrow down my search again? I call media-finder again, this time using the above predicates. Say I want to list the songs with a low bitrate, say below 150k, matching a specific artist. And there you go. So now I have two lists, and I would like to put them together. So here I have the previous result, and I append this result, which would be number three. Oops. Okay. So I have concatenated the lists. And this is the final list, which is the last opportunity that I have to inspect the list of files I want to process before making any fatal mistake. And that would be it for the demo. Whoa. This was a lot to take in. I hope it did not lose you. Despite all these novelties, there is still room for improvement. I'd love to have better job control, such as a convenient way to stop and continue processes, detach them to the background, record the time they ran, and maybe more statistics, such as CPU and memory usage. Pipeline introspection can be improved, such that we would easily inspect the input and output streams of individual sub-processes, or even build a process graph. Previously, I underlined the importance of using an extensible language for a shell. Racket, the programmable programming language, is an obvious contender for this role. Besides, there is already Rash, the Racket shell, while the DrRacket IDE could provide a graphical interface. All I've demoed today is only available to Emacs users, which is a shame, so it'd be interesting to have it ported to other more accessible interfaces. A popular option would be Jupyter, which is in many ways already a REPL, only lacking some more advanced developer tools like an interactive stack trace or an inspector. Another option would be the Nyxt web browser, which already features a graphical Common Lisp REPL. It was a long journey for me to reach here. I was a long-term user of other shells and tools, going from the terminal-based fzf to Emacs' Eshell and M-x shell.
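The bitrate and artist filters sketched above are just functions that return predicates. As a minimal illustration, here is roughly what they could look like, written in Guile Scheme rather than the speaker's Common Lisp; the accessors file-bitrate and file-artist are assumptions standing in for whatever the real media-file objects expose.

;; Return a predicate that is true for files whose bitrate is below THRESHOLD.
(define (match-bitrate-below threshold)
  (lambda (file)
    (< (file-bitrate file) threshold)))

;; Return a predicate that is true for files tagged with artist NAME.
(define (match-artist name)
  (lambda (file)
    (string=? (file-artist file) name)))

;; Hypothetical usage: keep only low-bitrate songs by a given artist.
;; (filter (lambda (f)
;;           (and ((match-bitrate-below 150000) f)
;;                ((match-artist "Some Artist") f)))
;;         all-media-files)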
The recent developments presented in this talk partly got inspired by Harvard's talk on Piper, who introduced similar concepts, but with a rather different approach. You will find most of my setup in my dotfiles, linked here. Finally, if you'd like to know more about the setup and implementation details, check out my website. I'm going to publish an extensive article soon covering all this. I hope my talk inspired you. Thanks for watching and happy hacking. Pierre? Are you here? Hello. Yes, I'm here. Hi. Hi. Yeah. So, again, you answered a lot of these questions in text during the presentation. But let's go through them here as well so that they will be added to the recorded video. Sure. Good talk, by the way. Thanks. Glad you liked it. Yeah. So, the most highly voted question was, how do you find the existing commands? Is it just... Yeah. Yeah. So, I guess there are two types of commands you want to find here. One is the Common Lisp commands. You can look them up using the traditional tools in SLY, like SLY's apropos, or just simple completion. Other commands you want to look up would be the external executables. And those you can look up using an extra completion system, which you can install with an Emacs package, or simply... Yeah, that would be mostly it. Yeah. That sort of answered the next question as well. So, are you able to use Unix-style commands? Yes. Something like ls or... Yeah. I think this is a very important point, because the strength of shells like Bash and friends is how easy it is to just call a program and pass arguments to it. This is what you want to do, right? And in any programming language, if you want to do that, you have to call a function and write strings. It's not convenient. But the thing that's really powerful with these languages is that you can program the interpreter to change how it understands what you pass to it. So, here what I've done is that I've programmed the shebang symbol to interpret what I pass to it as a Bash command. It would directly execute it in Bash. So, this allows me to execute any command as I would do in Bash just as fast, well, with just two extra characters. Yeah. So, you're spinning that off to an actual Bash process in the background? So, yeah. Well, I like to do it here because you can actually do two things. One thing is that you can send it to the shell, which is very convenient if you want to run a snippet that's shared by your friends or on a website. And there's so much Bash around that we need to do this all the time, right? Or, if you want something a bit more robust that doesn't have string interpolation or wildcards, then you can use a different symbol which tells the interpreter to execute this command with these flags, and this will not be passed to the shell. Interesting. Yeah. And then also some comparisons. So how does it compare to Eshell and how does it compare to Babashka? So, the short answer about Babashka is that I've never used it. I only heard about it, and from what I've read, on paper it's very similar. So I think you can achieve more or less the same things. It might be a bit more memory consuming, so if you need to fire up multiple Babashka instances, that may be a bit heavier on memory. Well, that's all I can tell. Eshell, however, I've used for many years. I've hacked it, I've improved it, I've done so many things with it. I think Eshell has a couple of shortcomings that might be a bit limiting.
For instance, it's not so good with, I mean, Emacs doesn't have thread management, doesn't support threading, and this is kind of limiting. So Eshell, I think the biggest blocker for me using Eshell is that it doesn't make the distinction between standard output and standard error. And this means that it can actually break numerous scripts or make it impossible to achieve some tasks. So there are a couple more things, but well, I wrote extensively about it in a few articles on my website, so I invite you to check them out. Cool, cool. Yeah, I guess those were most of the questions. Yeah, pretty much. I was a bit curious, like how does it, I'm guessing starting something interactive, like Vim or even like a git commit kind of situation would be tricky with this, since you're running it inside of Emacs. But the good news here is that it's Emacs, so you don't really need to start Vim. You already are in the editor. And for Git, well, in Git you can tell which editor you want to use, so same thing: if you write git commit, it will prompt the editor for the commit message, and then you can tell it to use emacsclient, which will automatically pop up an Emacs buffer in your running session. So it won't be disturbing here. But maybe what you're asking here is what if we want to run an ncurses program or something like htop that uses these terminal visual libraries. Here I think, well, you can't do this directly, obviously, because this is not a terminal. So there is one workaround that Eshell is already using, which is to intercept the command and scan for known visual commands. So if it detects that you're trying to run htop, it can deflect the call and fire up a terminal running htop. And you can do this, so you can streamline the call to any command so that it won't ever break. So that's pretty cool. But obviously you lose a bit of the integration here. In my opinion, I think that we should not tie ourselves to the past and keep trying to run VT100 terminals. I think we can do better with more sophisticated and modern UIs. So maybe it's time that we rewrote htop in something a bit more modern.
The popular but aging shells (Bash and the like) suffer from many design flaws: lack of structured data, pipes are hard-to-debug blackboxes, lack of interactivity, while the user interfaces are mostly poor and limiting. High time we moved on away from this cruft, starting with a top-notch interactive language boasting full-fledged introspection and debugging. For this experiment I've used Common Lisp and SLY, a Common Lisp REPL and development environment for Emacs.
10.5446/53557 (DOI)
My name is Chris. I'm going to set out some thoughts here about software, choices about software, and how those relate to values like minimalism. In doing so, hopefully you will be able to answer the question for yourself, is GNU Guix a minimal distribution? And you'll have an idea of why that might matter to you. Just a bit about me: I've been doing stuff around Guix for the last few years now. I got involved around FOSDEM back in 2016. First thing, if you haven't heard of Guix, here's a small introduction. I often find Guix tricky to explain as a project, mainly because I think you can do so much with it. The first thing that stands out about Guix is that it is a package manager. Packages in this case are software packages, and Guix can manage them, so Guix can help with things like installing software. Guix also includes a large number of package definitions, over 15,000 at the time I'm recording this talk. There's a whole range of software in there, written in many different languages, doing many different things. While Guix works on distributions using either Linux or the GNU Hurd, Guix also provides the tooling and configuration to form a working operating system. This is normally referred to as Guix System. So Guix is a package manager with lots of package definitions, and it's also a distribution of GNU. That's quite a lot already, but there's more. One not particularly specific but apt description of Guix is a software deployment toolkit. The guix pack command is a clear example of this. This command allows you to take Guix packages and wrap them into tarballs, Docker images and SquashFS images. In general, the guix pack command can help you to use Guix to deploy software to systems that might not have the Guix tools available. Returning to the question of whether GNU Guix is a minimal distribution, surely with the description I've just given, Guix can't be considered minimal. There's just so much going on. Well, I've described Guix so far, but what does minimalism mean, especially in the context of software? I think the properties set out in the call for papers for the declarative and minimalistic computing dev room are pretty good, so that's what I'm going to paraphrase here. Firstly, smaller systems generally take less resources and consume less energy. Resources in this context are things like storage space and RAM. Next, simpler systems are easier to understand and to secure. Building on this, there's one more aspect of minimalism that I want to consider. The Unix philosophy has a few ideas about building software and systems. It's from a while back, but it's still pretty relevant today. I'm just going to mention the first of four points. That point is: make each program do one thing well; to do a new job, build afresh rather than complicate old programs by adding new features. Focusing on the "make each program do one thing well" bit, I think that's really relevant here, especially in the context of minimalism. But how do you determine if a program is doing one thing? Fundamentally, all tasks can be broken down. You can see this in the way software is often written. Programs can be broken down into small, reusable parts, often called procedures or functions, and those do one thing. And code within those functions can be further broken down, with each line doing one thing.
If you're familiar with writing software, you'll know that how you break down a problem into code to tackle it is very important, not just for solving the problem, but also for doing so in a maintainable manner. Going back to the question of whether Guix is a minimal distribution, I think it's relevant to ask if Guix does one thing, and if so, what that is. The first way of answering this is no. I listed some of the things that Guix is earlier: a package manager, a distribution, plus some tools like guix pack. But I think there's a better perspective on this. I think there is one problem that Guix tackles, and it's the problem of software deployment. Now, I mean deployment here in a very general sense. Often you deploy to some computer you're not sitting next to, but I think getting software running on your local machine is deployment as well. You're still getting some software and making it work. Because Guix can be used to manage software and service configuration, configure whole systems and generate things like system images or software bundles in various forms, Guix can handle a very broad range of software deployment tasks. These are tasks that in the past would probably have been handled by separate tools. I think the consolidation that Guix provides is a real step forward. And this is something that just happens naturally in a lot of places over time. In computer hardware, for example, features that you would have once found in separate hardware like sound cards or network cards, that functionality has been consolidated into motherboards. Aspects of motherboards have been consolidated into processors. And generally, this happens because the advantages of the consolidation outweigh the costs. That's kind of why this might be happening, but what I want to do is really articulate what the difference is. And to do that, rather than just stating it, I'm going to show an example using the Ruby on Rails web development framework. I'm using this as an example because I've been working with it for the last few years and I've also worked on packaging it for Guix. So this isn't an example you should follow; I'm just using it to try and set the scene for what tackling a problem without, or with minimal use of, Guix might mean. I'm roughly following the installing Rails section in the Getting Started guide. First, there are four prerequisites: Ruby, SQLite, Node.js and Yarn. I'm going to use Guix to provide Ruby, SQLite and Node. The Guix Node package includes npm, and since Guix doesn't have a package for Yarn, I'm going to install Yarn with npm. To try and limit the impact on the rest of my system, I'm using direnv to manage the environment. What you're seeing now is the installation of Rails with the RubyGems package manager. RubyGems is the language-specific package manager for Ruby; gem is the equivalent term for package. This is taking some time as it's actually building some gems now and installing documentation. After this finishes, what I'm going to do is run the command rails new blog. This will create a new Rails app. That will do a few interesting things. A second tool related to managing RubyGems will come into the mix, called Bundler. While RubyGems is a package management framework for Ruby, Bundler is a tool to try and constrain the gem versions that your Ruby application uses. This is me quoting from the README files for these tools.
That's not going to be the end of the package management going on here, though. The rails new command invokes Yarn. That is used to install something called Webpacker, which pulls in like 600 dependencies. So here is the gem installation. This is Bundler, I think, now, which is using some existing gems that have been installed and installing some new ones. I wanted to run this in real time to give a sense of the time it takes for this process. Some Ruby gems come with native extensions. Those often take more time to install because you have to compile that native extension code. Back to the bigger problem of installing Rails, there are a number of different tools doing different things here. It is possible to describe each tool as doing just one thing. Guix was used to install prerequisites, apart from Yarn. RubyGems was used to install Rails, these parts of it. Bundler came into the mix to provide a more strict way of using gems. Finally, Yarn is going to be used to install Webpacker. It is taking me less time to say what I was going to say about this than the time it is taking to actually install things. Remember, this is my example of a situation that can be improved, rather than the way to go. I think it is getting there. This is now on to using Yarn to install Webpacker and all its dependencies. Nearly there. Done. Wonderful. What are the disadvantages of this fragmented approach using all these different tools? Firstly, you as the user are spending time dealing with several different tools, four or five in this example, depending on how you count. They don't really interoperate. Rails runs Bundler and Yarn, but that is not really interoperation at the package manager level. Next, you have also got a lot of state to manage. To illustrate this, I am going to show you some of the files that were created when I ran through creating this example Rails app. First, this is the Gemfile. This is used by Bundler and works on top of RubyGems. This file says something about the Ruby version; that is just a check. There is also a .ruby-version file that has been created. I am not sure exactly what that is doing here. This file lists the gems that I wanted along with constraints for the dependency resolver on the version. Next is the Gemfile.lock. This goes along with the Gemfile. The Gemfile is what you want to edit; the Gemfile.lock is updated when the dependency resolver is run. It is a bit longer than can fit on the slide. Switching tools, this is the package.json file, which is used by the Node package manager, in this case used by Yarn. Like the Gemfile, this includes a list of packages with constraints for the dependency resolver, just a different dependency resolver than the one I mentioned previously. Like the Gemfile and Gemfile.lock relationship, there is a corresponding package.json and yarn.lock relationship, where yarn.lock stores the outputs from the dependency resolver for the package.json file. This is a lot longer than the Gemfile.lock since it tracks 600 packages, I think. It includes hashes to check the integrity, which is a nice feature. Those are the four files: Gemfile, Gemfile.lock, package.json and yarn.lock. There is also Guix providing some of the prerequisite dependencies here, which I have not described tracking. The problem I am getting at here is that each of these tools has its complexities and its state. By using multiple package managers at once, all of this complexity and state is shifted onto you, the user. Next, I want to talk about the limitations this fragmented approach imposes.
You have got the cost of two dependency resolvers here, and all the resulting state, just so you can attempt to reproduce the setup in the future on different machines. You cannot look at dependencies across these fragmented ecosystems. The RubyGems resolver is not going to pick gem versions that work with the libraries you have got through Guix. Similarly, Yarn is not going to help you pick JavaScript libraries that work with the Ruby code you have got. This is just one example of how this breakdown of the problem, roughly along the lines of programming language, makes it harder to provide a good experience. All of this means that you are discouraged from picking software that is not available through the limited package managers you are already using. Ignoring Guix and the wide range of software it provides, RubyGems and npm are not going to provide you with tools written in, say, Python or Rust or Java, for example. With this fragmented approach to package management, you will be discouraged from picking the best available tool if that is not available through the package managers you are using. Finally, there are risks in being dependent on multiple tools. Bugs or security issues in these tools might impact you. Both RubyGems and npm are dependent on remote package repositories, and with these services comes the potential for downtime that prevents deployments. So with all of that said, what is an alternative? What would be a minimal approach to software deployment? Firstly, take a singular approach that works for all of the software you are using. Then you are not spending time trying to manage and use the interaction of multiple tools. Then, the same principles that apply to state management in a program apply to deploying software. Rather than running a dependency resolver, or even multiple dependency resolvers, and storing the results so you can use that state later, instead declare what dependencies you are going to use in a concise and reproducible manner. Before starting to use Guix, I thought that dependency resolvers were a core part of package managers. Now I think that in many cases you would be better off without trying to use a dependency resolver, especially at the time right before you try to install packages. Next, I think it follows that if you take one consolidated approach to software deployment, tooling can be built around this that isn't limited by splitting the problem down along the lines of programming language. As a small example of this, this is the visualised output of the guix graph command for the ruby-railties package. You probably won't be able to read the text, but each box represents a built package output and the lines represent references, so the dependencies between these outputs are the lines. Because Guix has all the information about relationships between the different packages, the Ruby packages are on the left and they are joined at things like Ruby, kind of in the middle, and some of the other libraries around the top right. I don't think being able to generate comprehensive graphs is a particularly enticing example of how the minimal approach is beneficial, but it is one of the more visual examples. I think the guix pack command is more generally relevant. Being able to generate tarballs, Docker images and other software bundles makes it possible to get some of the benefits of using Guix, while not depending on Guix being available where you're deploying software to.
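To make the "declare your dependencies" point concrete, here is a sketch of what a Guix manifest for the demo's prerequisites could look like; the exact package names and what they provide depend on your Guix channels, so treat this as an illustration rather than a copy-paste recipe.

;; manifest.scm: a declarative, reproducible list of the demo's prerequisites.
(specifications->manifest
 (list "ruby"
       "sqlite"
       "node"))   ; the node package also provides npm

A file like this can then be used with commands such as guix package -m manifest.scm or guix environment -m manifest.scm, and guix pack -m manifest.scm can bundle the same set of packages into a tarball or Docker image, which connects back to the guix pack discussion above.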
The complete view of packages is very important for making guix pack work, as it means Guix knows what to put in the pack and what it doesn't need to. From a more futuristic perspective, I'm really excited by the insights that Guix can provide about the software you're using, which is really important for keeping track of vulnerabilities that you might be impacted by. This is simplified by having one comprehensive source of information about what versions of software you're using. Moving on, using a general-purpose package manager that isn't specific to one language or one domain means that you're no longer discouraged from using tools written in other languages or dependencies in other languages. You can pick the best tool, not just the best tool written in the same language as you're using. Finally, with one approach to software deployment, hopefully it'll be easier to understand and more reliable. Particularly with Guix, there's the Git repository to fetch updates to Guix itself and the package definitions, but that's not required when performing operations, and there's the option of using a mirror of the Guix repository instead. Prior to saying even more positive stuff about Guix, I want to do some management of expectations. Guix has many high-quality package definitions, and there's only a few cases where Guix doesn't build things from source, out of necessity. Unfortunately, the trade-off of this approach is that there's still much work remaining to package a lot more software. JavaScript is a particular example of an area where there's not that many packages. I'd expect most of the 600 packages I mentioned earlier that Yarn installed for the demo are not ones packaged for Guix. Guix also only supports a number of architectures, so that's something to check as well if you're considering using it. I don't want to discourage you from considering Guix, but do keep this in mind when you're considering whether you'll be able to use it immediately. Packaging software for Guix is a community effort, so you can help out there as well if there are things you're missing. Back now to what makes Guix so good for deploying software. As I mentioned before, the package definitions that Guix has are rigorously built. They build the software from source with only a few exceptions, and in most cases they run the tests against the built software. Guix is great at orchestrating building software from source, but this isn't something you necessarily want to be doing all the time. That's where substitutes come in. They're literally substitutes for building the software locally. Guix as a project provides a build farm that builds Guix packages for multiple architectures, and you can download these built substitutes. You can also distribute substitutes yourself with tools like guix publish. This point is also a notable difference from the package managers mentioned previously. RubyGems doesn't have such a general approach to providing built software, and while that's not relevant for the vast majority of Ruby packages, there are some which compile native code upon installation, which can get frustrating when it happens again and again. Having just one package manager allows you to solve these problems once and solve them well. Next, Guix is dependable. It has good security properties: commits are signed, and signatures are checked when you pull in updates.
Rollbacks are detected, which helps prevent rollback attacks, and substitutes are signed, which helps provide some security when you're fetching them. There's some security built in from building the software from source as well. You're at least then trusting the source code and Guix, rather than the source code, Guix, and whoever built the software. Guix also does without many of the features that make other package managers less secure. Arbitrary code within packages isn't executed when you install packages, which is something that's possible with a lot of other package managers. Guix is also less dependent on network services, compared with a lot of other package managers, where these are single points of failure. Finally, the rich set of features that Guix provides means it can comprehensively tackle software deployment problems. If you can avoid using a distro, a different configuration management tool, several language-specific package managers, and then tools providing isolation on top of those, and replace all of that complexity with Guix, you can hopefully spend a lot more time on doing what you're trying to do, rather than spending that time trying to deploy your software. So is GNU Guix a minimal distribution? Yes, you can use it to consolidate your approach to deploying software. If Guix has the packages and features you need, and you're not already using it, adopting it might save you time in the long run. Even if Guix doesn't have all the packages and features you need, you might still be able to get some value by using it, and then maybe that will give you the encouragement to start contributing. So if you're interested in learning more, the Guix website is probably a good place for information. There are active mailing lists and an IRC channel. Finally, I'm just generally really excited by the potential I see in Guix, and I hope you now are too. I think we're going live now. I couldn't see any question. Yes, but if you have any, please just ask. I think everyone is just saying that your talk was great, and they were talking about Emacs versus vi, and also some stuff about NTM. But I don't think I've seen any question directed to you, precisely. So again, as with the last talk, if you have some questions and you want to continue the discussion, there's a specific room for the talk that is going to be advertised. So I see a question, maybe you can answer that. Is there something similar to Nix flakes in Guix? If you know what that is, then maybe explain what Nix flakes are. I don't really know what Nix flakes are, but I think there are some similarities, maybe, between Nix flakes and Guix channels. I think Nix has a concept of channels as well, but I think the Guix concept of channels differs from the Nix concept of channels, and actually Guix channels are somewhat similar to flakes, maybe. But yeah, I don't really know much about Nix flakes, so I'm not a great one to speak on that subject. Okay, so maybe the last question. Yeah. Okay. Yeah.
Minimalism is a useful perspective in software projects, and this talk will explore how minimalism and the related concepts of scope, convergence and efficiency seem to apply to distributions. My current focus is GNU Guix, but in this talk I'll compare and contrast with other distributions as well.
10.5446/53562 (DOI)
Good afternoon, my name is Arun. This is my submission for FOSDEM 2021. This talk is about sdiff, a diff program for Lisp S-expressions. Lisp code is data. Lisp has a list-like, tree-like structure that is trivial to parse and manipulate. Lisp source is almost literally the abstract syntax tree of the language. Automated source manipulation tools are really easy to write, but strangely few have been written. In this talk I present sdiff, a diff program for S-expressions. The Unix world treats files as a flat list of lines. Thanks to the legacy of Unix, the Unix philosophy and whatnot, most shell utilities operate on lines. For example, GNU diff compares two versions of a file and outputs the difference as a list of lines to be inserted and a list of lines to be deleted. We are all familiar with output that looks something like this: the red line with the old source URI has been deleted and the blue line with the updated source URI has been added. Lisp projects use diff tools too, but there is a bit of an impedance mismatch between line-oriented diff behaviour and S-expressions. For example, can you spot the actual change in the following diff? It seems as though four lines have been deleted and replaced with five lines, but in reality there is just one line added. All the other diff lines are due to a change in indentation. So what we really need here is not a line diff, we need a tree diff. Surprisingly, tree diff is a difficult problem. Extracting semantic meaning is always hard, and the general solution would probably require some kind of AI. But fortunately, we can approximate the solution by posing it as an optimization problem. For unordered trees, where the order of the children of each node is unspecified, the problem is NP-hard. We only deal with ordered trees. sdiff implements the MH-diff algorithm. MH-diff stands for meaningful hierarchical diff. This algorithm was proposed in the paper Meaningful Change Detection in Structured Data by Sudarshan Chawathe and Hector Garcia-Molina. MH-diff is a very complicated algorithm and I won't go into the details. Rather, I will give a very superficial overview of the method. MH-diff supports six operations: insert, delete, update, move, copy and glue. Each one of these operations comes with an associated cost: CI, CD, CU, CM, CC and CG. All of these are constants except for the update cost CU, which is a function of the old and new values. MH-diff poses the diff problem as an optimization problem. The goal is to find an edit script such that the total cost is minimized. MH-diff operates in two phases. First, match the old and new trees, and then use this matching to extract an edit script that can transform the old tree to the new tree. The edit script is simply the list of operations to transform the old tree into the new tree. Consider the two trees on the screen. On the left is the old tree and on the right is the new tree. To our very visual brains, it is immediately obvious that the star node has been changed to bar and nothing else has been changed. But the computer cannot see this so easily. So, we need some kind of algorithm to figure it out. MH-diff does this matching by constructing a complete bipartite graph with the old tree nodes on the left and the new tree nodes on the right. Each edge in this complete bipartite graph is a potential matching of the old and new trees. A bipartite graph is a graph with two partitions.
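Restating the objective just described in symbols (this is only a paraphrase of the transcript, writing c_i, c_d, c_u, c_m, c_c, c_g for the per-operation costs): among all edit scripts E that turn the old tree T_old into the new tree T_new, MH-diff looks for the one with minimum total cost, and only the update cost depends on the values involved.

\min_{E \,:\, E(T_{\mathrm{old}}) = T_{\mathrm{new}}} \; \sum_{op \in E} c(op),
\qquad
c(op) =
\begin{cases}
c_i & \text{insert} \\
c_d & \text{delete} \\
c_u(v_{\mathrm{old}}, v_{\mathrm{new}}) & \text{update} \\
c_m & \text{move} \\
c_c & \text{copy} \\
c_g & \text{glue}
\end{cases}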
The nodes in one partition are connected to the nodes in the other partition, but there is no edge connecting nodes of the same partition. So, here A, B, C are one partition and E, F, G are the other partition. So, each edge in this graph is a potential matching of old and new tree nodes and comes with a cost. The goal is to prune the edges down to something like what you can see in the figure now, such that the total cost is minimized. This is known as the minimum cost edge cover problem and can be solved using the Hungarian algorithm. Now for the fun part: demos. Let's fire up a shell. Here we are. So, this is the sdiff source code. I have prepared some examples. Let's look at a really simple one to begin with. So, foobar old.scheme. This is a one-line file, and this is the new version of the file. The only difference is that foo has been changed to bar. Let's see what line diff has to say about the difference. So, foobar old.scheme, examples foobar new.scheme. Line diff reports that one line has been deleted and replaced by another line. Fine, but sdiff can do a little better. Here we go. So, we see that only the word foo has been replaced by the word bar. That is a much clearer diff. Now, that was really trivial. Let's try something more complicated. So, let's try this file, inputs old.scheme. We see its contents. Now, the new.scheme, and what does line diff have to say about these two files? So, we see here that one line in the old file has been deleted and replaced by two lines. That's not so bad, but we can do better with sdiff. sdiff, on the other hand, clearly indicates that the only thing added is the two ghostscript elements. That's much better, isn't it? Now, some more examples. Let's look at an example where line diff falls flat. So, the contents of guile old.scheme are like so, and new.scheme. If you ask line diff what the difference is, it says that all the lines have been deleted and replaced with totally new lines, but that's not really true. On casual inspection, we see that there are indeed many lines that have not been changed. So, maybe sdiff can do a better job. Let's see. There we are. All sdiff is saying is that this has added a package-with-extra-patches function at the beginning and a patch at the end. Line diff was thrown off because of the change in indentation due to adding a function at the beginning. All the lines were indented one or two steps in, and that really threw off line diff's change detection. So, this is a profound example where sdiff does much better than line diff. So, maybe just one more example. Let's do rubber. Those familiar with Guix might have noticed that these are all Guix code snippets, and this is the package definition for the rubber package. Let's ask line diff what the difference is for rubber. So, it indicates something, but sdiff can do much better. Now, sdiff is slow for large trees with many nodes; it really needs to be optimized to be faster. So, there we go. You see here the red highlighted parts are the ones that have been deleted or updated: the version, the hash, the build system field and the inputs field. The version has been changed, the hash has been changed and the build system has been changed. It's much clearer, and we immediately see what the changes are without having to manually dig through and compare each line. So, that's it for the demos.
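The guile example above is easy to reproduce in miniature. The following sketch (not the actual files from the demo) shows why wrapping an expression defeats a line diff: every line gets re-indented, so line diff marks the whole block as changed, while a tree diff only has to report the single added wrapper. In the demo the wrapper was a package-with-extra-patches call plus a patch argument; here a plain reverse call stands in for it.

;; old.scheme (illustrative sketch)
(define inputs
  (list "bash"
        "gcc"))

;; new.scheme: the same expression wrapped in one extra call; the body is
;; unchanged but every line shifts right, which is all a line diff can see.
(define inputs
  (reverse
   (list "bash"
         "gcc")))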
Going forward, what still needs doing? sdiff isn't quite ready for everyday use yet; there are plenty of bugs to fix and a lot more testing is necessary. Maybe the cost model should be improved, and more operations such as move, copy and glue should be supported. Lisp syntax is not as regular or syntax-less as is often claimed. It does have a bit of irregular syntax, such as quoting, line-based comments, etc., and these need to be dealt with for a proper diff program. The diff output needs to be cleaner and more concise, and optimization is really required because, as you saw, it's quite slow on large trees. Finally, it should integrate with and replace tooling such as git diff. That is the long-term goal of sdiff. And sdiff could also be used to compare other non-Lisp S-expression data, such as the data files used by LibrePCB, the electronic printed circuit board design software. Thank you. Thank you for listening. The code is available under the GPLv3 at the URL. Feedback and criticism is welcome. Would you use sdiff? How can sdiff be more useful? Be sure to let me know via email or in the Q&A session. Thank you. Okay, we are live. So I see there are some questions already. So there's a question from Johanna asking what parts of the implementation seem the most important to optimize to make this more usable for larger trees. Yeah, so there are two parts. The MH-diff algorithm itself is order n cubed, but my implementation is probably worse. There's a lot of scope for optimization. One is the Hungarian algorithm. We need to find the minimum edge cover of the bipartite graph, and I use the Hungarian algorithm to do so. But there may be better algorithms which are cheaper. The Hungarian algorithm is order n cubed; there are probably cheaper algorithms than that. Another is the tree traversal. The nodes in the trees are addressed with numbers. These numbers are simply the index in the pre-order traversal of the tree. Currently I traverse those trees multiple times to get those numbers, and that probably should be done only once. So there's plenty of scope for optimization there as well. So another question is, how does your tool handle comments and changes to indentation? Right now it doesn't. It simply ignores the comments and any whitespace differences. It just uses Scheme's default reader, and that does not see the comments at all. So I'll have to implement some reader which will be able to read the comments and the whitespace differences. Maybe the Guile reader could be of use there. Okay. Another question, which is maybe heretical: could this be useful for other tree formats by converting them in and out of S-expressions, such as, for instance, XML? Yeah, it could be used for anything that can be converted to an S-expression. I'm not sure what exactly the difference is between XML diffs and S-expression diffs. Many algorithms and papers talk about XML diff, so is there anything specific about XML in them? And then we have another question asking, is the source code released anywhere? Yeah, there was a link at the end of the last slide, and it's also there on the FOSDEM talk page as an attachment. I uploaded the slides and the source code, but there is no public repository as yet. I don't see any other question. Oops. Oh, so another question. Is there an s-patch tool in the works? Not yet.
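As a small aside on the node-numbering answer above, here is a rough Guile Scheme sketch (not sdiff's actual code) of addressing every node of an S-expression by its pre-order traversal index, which is the numbering scheme being described.

;; Number the nodes of an S-expression tree by pre-order traversal.
;; Returns a list of (index . node) pairs, indices starting at 0.
(define (preorder-index tree)
  (let ((i -1)
        (acc '()))
    (let walk ((node tree))
      (set! i (1+ i))
      (set! acc (cons (cons i node) acc))
      (when (pair? node)
        (for-each walk node)))
    (reverse acc)))

;; Example: (preorder-index '(a (b c) d)) numbers the whole tree as 0,
;; then a, (b c), b, c and d as 1 to 5, in pre-order.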
So for that, the sdiff output format would have to be standardized, and I'm not sure, if the original source code for which the diff was generated has changed, how critical that would be for s-patch. Would s-patch still be able to apply the diff correctly? I haven't looked into that. So for now, there is no s-patch. And so the last question in the chat is, how can we contribute? I mean, once there's a public repository, you could probably send patches, but the code is not in such a state right now. So maybe you can watch out on the guile-user mailing list. In maybe another six months I will post some announcement about the public repository. So that's what you should look out for. Okay, so thank you for the talk and answering the questions. I don't think we have any other question. I hope I didn't miss anything. So we still have some time for questions. So if people have any question, I think we still have two minutes. So maybe if no one is asking a question, I have a question of my own. What is the glue operation? So the glue operation, I don't really know, to be honest. I mean, I didn't implement it and I forgot what it is, so I'll have to look at the paper again. It's supposed to complement the move and copy operations somehow. That's a part I have not implemented. Another question: is there also a textual or S-expression output? Pardon? What is it? Is there a textual or S-expression output? What do you mean by that? So does your tool give you some only some
Lisp has a wonderful minimal syntax that almost directly expresses the abstract syntax tree. Yet, diff and other tooling operate on the unix newline ending model. When lisp prides itself for its minimal syntax---code is data---that is easy to parse, why can't we do better? Traditional diff implementations, such as GNU Diff, treat files as a flat list of lines. A tree-diff algorithm that can produce minimal and semantically meaningful output is a surprisingly more difficult and complex problem. In fact, for unordered trees, the problem is NP-hard. In this talk, I will demonstrate a very early working prototype of an S-expression diff program. The program can operate on two versions of some lisp source code and extract a meaningful tree-diff. The program aims to replace 'git diff' and related tools for lisp projects.
10.5446/53371 (DOI)
Hi, everyone. So my talk today is about eating your own dog food. So we've all heard the term dog fooding, but this talks about building a product using WebRTC for broadcasters, and it's called Broadcaster BC. So we've only got 20 minutes, so I'm going to get on and hopefully hit my 20 minute mark. So if you hear some rain above my head, it is the middle of winter in the UK. So I'm hoping the OBS and the noise filters will clear it all out. So let's get going. So a little bit about me if you don't know who I am. So I'm Dan Jenkins. I'm a developer and architect of real-time communications projects. I'm the founder of a company called Nimbleape, and I created a real-time communications conference in the UK called Comcom. I build Lego as and when I can. I'm the chief ape at Nimbleape, and I created something called Broadcaster. And as I keep telling everyone, I'm a web developer who just happens to do real-time communications. There aren't that many of us. And I'm Dan and Scott Jenkins on Twitter. I do enjoy a couple of tweets during my talk if someone wants to tweet me. So Nimbleape, we were founded in 2013, real-time communications consultancy, and we do a load of open source VoIP and we do a load of WebRTC. And that's native using React Native, for example, or Flutter, as well as Web. And then we build and we use open source. So let's talk a little bit about Broadcaster. So in short, Broadcaster is designed totally for Broadcasters as well as A-B companies so they can stop hacking solutions around bringing in remote participants using Zoom, MS Teams, Skype, insert other conference service here, and tools such as VMIX, which is a professional editing suite. This isn't a commercial talk. Yeah, I don't do commercial talks. It's borderline app arts, but hopefully you'll come away with it, come away from the talk feeling as though I did it justice. So it's not commercial talk. And if anyone feels differently, then holler at me on Twitter and tell me I was wrong. So where did this story begin? Every single talk has a story to it. And it all started because I hated seeing the Zoom UI on news reports. And basically the BBC in the UK during the pandemic, pretty much every single remote interview, you could see the Zoom UI. And for me, this just crossed the line. Public Broadcasters should not be openly advertising commercial companies. There is a reason why, say, when news reporters or TV programs use Apple laptops, they cover up the Apple symbol. But for some reason, this was perfectly fine. We were able to just see the Zoom UI. And for me, even though some of the time they said Zoom, some of the time they didn't, but for me, anyone that's actually used any kind of conferencing service during their lifetime slash the pandemic, this was essentially rubber stamping Zoom as, well, the BBC use it. So why shouldn't I? That's the way I saw it. And I also ran an event last year called Con Con Virtual 2020. So we ran two in-person events in 2018 and 2019. And then it had to move virtual for 20. We weren't actually going to run an in-person event in 2020. And this is why it actually took a lot of knee-jerk reactions, let's say, to get us to a point where we've actually run in Con Con Virtual. So in the picture, you can see Lorenzo and from the Meet Echo, the Janus team in the top right. And they're on MI on the top left. And then his slide deck was in the bottom left. And then that's our AB team in the bottom right. Then you can see that we actually used JT for screen sharing. 
And I'll tell you about that in a minute. So I found Con Con Virtual was like this huge teachable moment for me in terms of how to run a virtual event. Not just in terms of logistics of the fact that people weren't meeting in person. It was more about how do you actually run a virtual event and make it good? Because up until that point, all that I had seen of virtual events was people like 50, 60, 100 people going on Zoom. And there was just this wall of people watching this one person talk. I was like, why do you all need to be on Zoom? I don't need to see bobbing heads. I don't need to see 100 bobbing heads on Zoom. It's just the most ridiculous thing in the world. So it was this huge teachable moment for me. And we actually only, like I said, we only decided to actually do it about six weeks before it was meant to run. So and that included getting all the sponsorships, getting all of the talks ready, recording all of the talks, doing post-production and everything, and then actually running it. The fact that I just counted all of that on five fingers tells you how very little time we actually had. So I had plans to sit, like when we were thinking about doing it like months and months ahead of time and like the pandemic could hit and we were like, it's kind of right. We should probably do it. It's like if a real-time communications conference can't happen, then like no one, we shouldn't be talking about what we do because like this is the time for real-time communications, right? So I was zooming and aring and eventually, yeah. But ultimately months ahead when I was zooming and aring, I was thinking, well, we could buy some cameras and we could send them all around the globe. And then like we could get all of that footage sent back to the AV team and they could just post-product. But I simply ran out of time to do all of that. So we got our previous AV team from the previous year to help us and they ended up using VMIX which is this professional tool for editing video. And it comes with real-time participants and it actually uses WebRTC for it. But it's not very good. But it gives you very, very little control. It didn't give you the basics of being able to choose which camera you wanted. You had to use the browser UI to be able to do that. And anyone that's actually tried to use the browser UI and say Chrome knows how impossible it is, right? It's actually really easy to do in Firefox because they ask you at the beginning of every single call. But actually changing your device in Chrome is actually quite difficult. So yeah, they didn't have an in-built UI using the in-built browser APIs. So immediately I was like, oh, WebRTC, this is great. The thing that we're talking about at the conference, we're using it. And then when we actually started using it, it was just a massive, massive, massive pain. And ultimately, VMIX acts as an MCU. And so it mixes everything. And so in that video, that picture, all of my audio and my video was going to the AV team that was somewhere else in the UK as well. And then Lorenzo's audio and video was going to them as well. And then they'd all get mixed. Ultimately, it's really cool because you can send different mixes to different people. But there's this delay, this inherent delay. And ultimately, it meant that actually having, say, a panel conversation was near impossible. Me and Lorenzo, who we know how WebRTC works, we know how media works, we were constantly talking over each other whenever we needed to talk to one another. 
And that was the same with all of the participants, not just Lorenzo. So yeah, I was like, surely there must be a better way, right? And all the way through, the AV team was like, ah, there's this great thing called NDI, and Zoom's meant to be getting it soon, and Teams is meant to be getting it soon, blah, blah, blah. So I kind of got interested in it and then didn't really do much with it, because we thought Teams was going to have it and we thought Zoom was going to have it. And Teams does have it, but it's not very good. Anyway - what is NDI? I'm ranting. NDI stands for Network Device Interface, which is a rather broad term, I think. But ultimately, NDI is a protocol and a codec, and it allows us to send low latency, high quality video and audio over a network. There is a load of hardware as well as software that understands how to send and receive NDI. And the NDI SDK itself is royalty free, which is quite cool. But it wasn't designed to work over the public internet. There's another thing called SRT, which is designed to go over the public internet, but that's a whole other thing, and it has its own challenges as well. So we come to WebRTC - we're 10 minutes in, I've got 10 minutes left. The thing about WebRTC is that it works in almost every single web browser on a mobile device, let alone laptops, let alone desktops. WebRTC just kind of works, right? Whereas NDI and SRT just don't work like that. And that's ultimately the power of WebRTC: the fact that you can just open up a browser - and until recently this wasn't the case, especially on iOS, but now you can pretty much open any browser on a modern mobile phone or tablet - and you can go to a URL, and if it's a WebRTC session, then you'll get joined into a WebRTC session and it will just work. Whereas to be able to do anything with NDI or SRT, for example, you're actually having to download a native app. And then WebRTC gives us the ability to bring in remote participants and not have to mix things, because we can just forward streams to different places, and it gets away from all of those annoyances with vMix, for example. So that was a longer story than I planned. But the super short version of how Broadcaster got born was: because of CommCon Virtual, we got interested in NDI. And then I was talking to Lorenzo about it, over with the Meetecho team, and because I talked to him about it a lot, I piqued his interest. Or maybe I just bored him to death and he thought, Dan, I'll just do this and you can get off my back finally. But what happens when you pique a geek's interest? They go and build something. So the first image on the left is the initial announcement on July the 17th. Considering I think CommCon Virtual was a week before that, maybe two at the max - within a week or two, Lorenzo had gone away and built a basic proof of concept plugin for Janus to output NDI from Janus, which is pretty cool. Fast forward to November the 27th, and I announced Broadcaster using Meetecho's NDI Janus plugin. So obviously we'd been working together up until that point - well, we still are, but we've been working closely with one another, trying to progress things during that time. So Broadcaster VC is built on top of Janus as an SFU - Janus is actually running in two parts of it - and then it's also built off of Meetecho's Janus NDI plugin, just to be super crystal clear about this.
The NDI plugin itself is not open source, but it is licensable. I don't begrudge the Meetecho team at all about this. Everyone wants to put food on the table, right? And you can only do so much open source - you can only be the good guys so much. We all run businesses, we all need to make money, and I think this is a great opportunity for Meetecho. And totally, if people are making money out of a tool, then the creators of that tooling should also make some money off the top of that. But ultimately, the rest of Broadcaster is built completely off of open source. And this was going to be how we were going to be recording this session. I was going to have my Mac in front of me - there's a whole other story about that, we'll get to it in the Q&A probably - I was going to have Keynote open, and then I was going to use Broadcaster to broadcast a screen share of my talk and my video and audio over to another machine in the corner of my office, which was then going to record the session. And it was all going to use Broadcaster and NDI and OBS. But there's actually a problem within OBS: as soon as you have more than two NDI streams with audio, you get audio stutter. And I just didn't have time to figure out what the problem was, or to move over to, say, trying to use vMix or any of the other tooling. So that's why I decided to end up not using Broadcaster, which is a bit of a shame. So let's take a quick look - we're 16 minutes in, I've got three minutes left - let's take a quick look at Broadcaster and how it's built. You can see, top left, we've got our remote participant; think of that as the person that's being interviewed. Top right is the person that's interviewing. And then we've got our video editing suite at the bottom. And you can see there, all of those streams come in as NDI sources, and the Janus plugin can also emit this test pattern. You get individual streams for individual sources, so ultimately, if you have five webcams with five microphones, they'd all come in separately. I said earlier that MS Teams supports NDI. It does, but it only supports NDI streams for video, and then it gives a mixed audio feed. So it's actually pretty pants for AV teams. So how does it all work? It's quite simple - and that's quite scary to say for someone that's going to try and make some money with this. Web browsers essentially send their WebRTC streams up to a WebRTC SFU in the air-quote cloud. And that can be anywhere in the world, within different regions, or even within people's own LANs. And then we've got a Docker image that runs inside their network and generates these NDI streams. Like I said, NDI doesn't work over the public internet - it has to be within an actual LAN. And basically WebRTC gives us all of these great things. It gives us NAT traversal, it gives us nice scalable video coding with different qualities at different layers and things like that, as well as the fact that it all just works in the browser, which means it pretty much just works universally. It's encrypted by default, as we know. It means we can do awesome things like saying, well, bandwidth is a bit low, so let's make sure that we prioritize sending audio over video. We don't need symmetrical media. We're using Opus, which means that we can handle packet loss, and we can use AV1 and the other royalty free video codecs, and use new advances like RED audio and all of these new APIs.
So the definition of dogfooding is basically using your own product before it's made available to customers. And that's what we've been doing. We've been using Broadcaster to help people run small events, slash vlogs, slash whatever. And ultimately, unless we're using the technologies we speak so highly of, why should anyone else? And hopefully we're going to start seeing less of the Zoom UI and more of nothing - more of just video. I mean, that's pretty much it. If you were expecting more, then basically my message is: a super useful service doesn't actually have to be super complicated. And yeah, there's some really clever tech in here, namely Janus and the NDI plugin, but ultimately something can be really, really simple too. As long as it gives value, then it's a great business avenue, a great project avenue and a great way of giving back. And yeah, Broadcaster VC is currently in active beta with missing features and lots of bugs. Thank you very much. I hope to see you all in the Q&A. And that's me. Don't be so self-deprecating - it's great. Okay, coming up in a few seconds, we switch to the room and then the live stream here. Great. I get a 502 for the question bot. Perfect. Okay, we're live with the questions, Dan. So what are the questions? Looks like the bot has gone bad again. We will try our best here. Let me first do something so that the Q&A... what the hell, why have they...? So how are you doing that? I see a question from Tim: does the fact that many BBC staff are working from home impact the design of Broadcaster? Not really. At the end of the day, you want everything to just work, and therefore the stuff that works within the network for, say, bringing it into a video editing suite just needs to work, whether or not you're at home on crappy broadband or whether or not you're in a professional studio. It's all designed so that it just works. You don't need public IPs, you don't need port forwarding, you don't need any of that. So it didn't really change how we approached making Broadcaster at all, because it should just work without any setup other than running one command, basically. Right. A follow-up on the open source NDI library that the VLC folks are working on. Yeah. So I've been keeping up with it quite a bit - well, I say keeping up with it, other than them announcing that they were going to do it and starting to do some stuff on it, there actually hasn't been a lot of public progress on it, from what I can see at least. So I've been keeping up with what's happening with it, but it doesn't seem like much is happening with it. I'd love to be told I'm wrong, because I think that's a really great thing - the NDI SDK and everything is license free, but it's not really open source, whereas the VLC one would end up being that, which would give us greater...
Seeing Zoom used for interviews and "virtual audiences" throughout the pandemic was humiliating for those of us who build projects and products with WebRTC. There must be a better way; and there is - building a WebRTC platform to generate feeds that broadcasters and event producers can consume as they see fit - no need to show Zoom's UI on TV any longer! This is the tale of how and why we built the service that's been used to record all of the RTC track sessions at FOSDEM.
10.5446/53372 (DOI)
Hi, my name is Sean Dubois and I'm here to ask the question: can we make WebRTC easier? Before we get into that, this is my background in the RTC space. First off, I started the Pion project, which is a collection of Go RTC libraries. They're designed to be building blocks to build your vision. The idea is that Pion fits into your existing system; we want it to be flexible, and it's community-owned and non-commercial. The first project was Pion WebRTC, which is a PeerConnection API in pure Go. It's the API as you expect in the browser - you call create offer, you call add track, you create answer - but instead of having a tightly coupled media pipeline, you control it. We provide Pion media devices so you can call getUserMedia like you expect, but we also provide other ways: you can read from a file, or you can bridge with RTMP or RTSP. We also provide a setting engine, which allows you to set proprietary Pion-specific options - you can control mDNS and other things like that. The last concept that is specific to Pion is interceptors. You can actually set your own congestion controller, you can set your own NACKs; it's basically a pluggable RTP and RTCP pipeline. If you're familiar with GStreamer, the design is heavily influenced by that. Two other major libraries: we wrote a TURN API in pure Go. It's both a TURN server and a TURN client that you can embed in existing applications. It's a library, so you get callbacks for authentication - you can plug into your existing auth systems, it's not tightly coupled with Redis or an existing SQL server, and you can bring your own logger. You can run TURN and HTTPS on the same port by reading the first couple of bytes and then demuxing and deciding which way to go. You can embed TURN in an existing application. Then one of our biggest projects is Ion, which is a cluster-based system for building real-time communication. The idea is that you have a Docker container for each one of your processes. You can have an SFU instance, you can have an AVP instance, which processes real-time media, or a live instance, which does the protocol bridging - RTMP, HLS. Then the more recent project I've been working on is WebRTC for the Curious. It's an open-source book on how WebRTC really works. It's not just about calling the JavaScript APIs, but a deep dive on the protocols and how WebRTC actually works. I'm also working on getting interviews with the RFC authors. I was lucky enough to talk with Ron Frederick, one of the co-authors of RTP. I haven't been able to talk to anyone else yet, but if you do know them, I'd appreciate a shout-out - convince them to do a little email exchange. It's super important to get that history. Then WebRTC in practice: we want to teach people how to debug. It's a shame that people have to re-learn this stuff. The first half of this talk is informed by questions that I've been asked while working on WebRTC projects. We run a Pion Slack where people come and they'll say, this doesn't work, I don't understand. A lot of the questions aren't even really related to Pion; they're more general WebRTC questions. First, people don't understand what WebRTC even is. You go off and most of the docs are for the JavaScript API defined by the W3C. If you search how to do something with WebRTC, it doesn't talk at all about the actual protocol - how do you get WebRTC working outside the browser? Then if you dig a little further, the definition of WebRTC isn't even consistent.
If you go to the IETF, they define WebRTC as a protocol. They have this collection of docs and it'll say it's a bundling of ICE and DTLS and SRTP. If you go to the W3C and a lot of the public internet, it'll say it's the JavaScript API. If you go to WebRTC.org, it'll say that Google's implementation is what WebRTC is. The definition is all over the place. To get a good sense of this, just compare the definitions on Wikipedia, WebRTC.org and MDN Web Docs. No one agrees what WebRTC is. The next thing is, when you're building stuff, WebRTC is hard because you have to anticipate the problems that are going to happen down the line. I have a lot of people that will build things, and then when they go to deploy, things go wrong and it's very frustrating for them. They'll hit into network topologies, and I can't tell you how many times I've asked, are you running a TURN server? People don't anticipate this because they build it in a lab and it works perfectly, but then they go to deploy it to production and everything blows up. Very frustrating for everyone. Then you have that not all clients support H.264. They build this wonderful RTSP bridging system and it works great. Then they go out to production and find out that these Android devices only support VP8. One customer reports bad video and you say, what is your plan around congestion control and error correction? It blows their mind. They're like, I didn't consider any of this, I just want to send video over the internet, why do I have to worry about this? It's frustrating. I don't know what the solution here is, because I want to teach people these things and help them not hit these roadblocks in production. The next thing people say is, where do I even ask for help? I think the vendor-specific communities are really great. I'm on the GStreamer mailing list and it's very helpful - if you ask things that are related specifically to GStreamer with WebRTC, you'll get help right away. Another great one is all the specific SFUs: mediasoup, Janus, Jitsi - their mailing lists are very healthy, and there's a good chance that you'll get good help from them. The vendor-agnostic communities aren't so great right now. discuss-webrtc: you'll get a lot of questions but not a lot of answers, and if you do get answers, they're usually pushing people towards a commercial product. There's Freenode, there's the IRC channel - I think there's two or three people in there. The technical depth is really great if you have a question, but not much beyond that. One that I think is showing a lot of promise is Video Dev. They have a channel dedicated just to WebRTC. It's been getting more and more activity - the Red5 Pro people are in there, and there's a lot of people that are interested in spec work and stuff like that. I do enjoy being in that channel. The best thing I've seen is Twitter. You have pretty much all of the people that are involved in WebRTC on there, but that's not friendly to someone that's getting into the community - there's no way they're going to go follow all these people and figure it out. It's not tenable. The next thing people say is they want WebRTC in their language. Since the most popular implementation, Google's, is in C and C++, that doesn't really work for everyone. C and C++ is super powerful, but at a cost: you have to worry about the memory and you have to worry about security. Then it doesn't fit into existing code bases or build systems - a lot of times people are using an entirely different language.
Even if they are using C and C++, they just want to use CMake, and I tell them, you've got to do gclient sync and all these things. It's very frustrating; it's challenging to build. The other thing is some people just want to read code. Google's implementation really is too large for anyone to read and understand - the last time I heard, it's definitely over a million lines of code. That's not something people are going to navigate and understand. I also have people come who are targeting other platforms. If you tell somebody to use a WebRTC implementation that uses OpenSSL, it's out of the question - they have to use mbedTLS, or they have to use the primitives that are available with that hardware. Lots of these corner cases that we just forget about. The other one is I see a lot of non-standard use cases. A lot of people will join the Pion Slack and they'll want to talk about teleoperation, they'll want to talk about file sharing, they'll want to talk about NAT traversal for data channels, they want to do some server-to-server thing. There's not really a home for them to talk about these things - discuss-webrtc and Video Dev are very, very focused on the SFUs and the video side of things. Another one is, because people don't understand WebRTC, there's a lot of not-invented-here. The two that I've seen in the last year: I interacted with someone who was working on a proprietary ICE clone. They thought, ICE is too complicated, I can do so much better. They learned a lot along the way and they made a lot of mistakes. I guess there's nothing wrong with learning, but it's a shame that WebRTC is perceived as too complicated and so no one wants to use it. We've seen a lot of protocols coming up that are basically just a more simple version of WebRTC - it's RTP without SRTP, or RTP but without the sender feedback and NACK and forward error correction. These are all just responses to WebRTC's complexity. On top of that, you have these other protocols that exploit the lack of information about WebRTC. They say WebRTC is browser only, they'll say WebRTC can't send HD - all of these things that, yes, are true if you're using it from Chrome or Firefox, but WebRTC is a protocol; none of these things are enforced by that protocol. So what can we do about that? How can we make WebRTC easier to use? How can we dispel some of the disinformation and in general just make a better community and help people build things? I think a big part would be embracing the other WebRTC implementations out there. At this time, these are all the implementations we have out there. We've got a Python implementation. We have a pure TypeScript implementation - it's not using any C or C++ libraries, it's pure TypeScript, which I think is fantastic for safety. We've got an Erlang implementation that isn't public. We've got Pypes' Java implementation. We've got a Rust implementation coming up. AWS has an embedded implementation that uses mbedTLS and it's really targeted at small devices. There's a lot going on in this space right now. I think that if we embrace these, it will help bring out a lot of the variety in the space. The one thing I'd really like to see this year is interop testing tools. I'd like to see a test suite where I can run two WebRTC agents against each other and assert things like protocol features and compliance. It would be great also for people to learn WebRTC.
If I can spin up this test suite and run it, I can say, oh wow, these are all the interesting things that WebRTC supports, and learn from that. It also makes it so much easier for new implementations. werift and webrtc-rs have both been reading the Pion code base to learn from it. I think that's great, but it would be even better if they didn't have to do that at all - if all they had to do was run this test suite, assert that things worked, and then move on. We need more teaching resources. I started WebRTC for the Curious and it didn't really have the uptake that I hoped it would have. It was designed to be this vendor-agnostic book that I'd hoped to build a community around, but it didn't seem to spark much interest. I'd love to hear, especially from the RTC community, what do you think it will take for it to be a useful resource to give people in your community? I don't think that each SFU should have to explain how WebRTC works to all their users. I would love it if we could come together and work on this book and have something that could save us all time. Another thing I'm excited about is the Pion interceptors project. We talked before - it's this RTP/RTCP pipeline outside the peer connection. You don't even have to use WebRTC to use them; there's a Go RTSP server that's also using these interceptors. I think that'll go a long way towards education and just making these things more accessible. Another one is better video debugging. This one, I don't see what the path forward is - it sounds more like a browser-specific issue - but especially with protocol bridging, so many people come to me with questions about debugging H.264 in the browser. It's a frustrating thing and it's always the same: you need an SPS and PPS with an IDR, things like that. We need more supportive communities. I want to start a non-commercial meetup that maybe meets once a quarter. We can do a deep dive on one WebRTC topic - someone can get up and just talk about congestion control, or just talk about ICE restarts, things that interest them. A roadmap of a WebRTC library - so one month we can have Janus get up and just talk about what's interesting that's going on in the Janus space. And then a demo of one downstream project - there are so many interesting things happening right now with VR and teleoperation, and I'd love people to hear about them. And of course, the vendor-agnostic community - it would be great if this was Video Dev that people could talk in. How do we encourage people to give back? WebRTC and video in general have this problem where people drop in and ask for things but don't want to give anything back. And then individual ownership - this is something that's important to me. A robust community for WebRTC needs to have many owners. I would hate to see WebRTC have a CentOS moment. Again, CentOS was just sunset by IBM, and the same thing can happen to WebRTC if we put too much faith in one commercial entity and they're not interested anymore - that could cause a lot of damage. And last, I want to just showcase what I believe are some of the most interesting open source projects in the space right now. NS-Remote: someone wired up a WebRTC agent and made it so you can actually stream the video from your Nintendo Switch to a VR headset and play your games in VR. They're sending these video frames over their local network and they're dropped right into the game.
I think it's super interesting that the more portable we make WebRTC implementations, the more we can allow people to do things like this. Macarabra is a protocol bridge: you have these RTSP cameras, and the project that did this, I think it took maybe one, two weeks at most. And I think it's really great that we can bring the security, the NAT traversal and all these improvements to the IoT camera space. A lot of these cameras right now are just running over the public internet unencrypted, and by making WebRTC more accessible, we're keeping people's data safe. Cloud Game lets you play games over the internet, which, especially during COVID, is how I think a lot of people spent their time connecting with people. A later version of this actually allowed people to play multiplayer - you'd run an NES emulator on a remote server and it would ship your video frames in, and people could play multiplayer games. Neko gives the ability for people to watch videos together perfectly synced. You're running a browser on a remote DigitalOcean or AWS VM and everyone who's watching can use the browser together. So me and a friend can pull this up and watch a YouTube video together, or maybe we can play a game online. The possibilities are endless, and it has a chat with it. Tools like this are great. The teleoperation stuff I love as well. One individual was able to make a thing where they can actually control a Tello drone remotely, just over a web interface, because they're able to do protocol bridging. ASCII is a really fun tech demo that shows WebRTC isn't just a browser technology. Right here we have a command line program that's just driven via your terminal, and the other side is sitting on a web browser, and we're talking to each other just through our terminals. Cloud Morph: the same developer that made Cloud Game then came out with another project where you can run Windows applications in a remote VM and then stream the video frames over the internet. I think this is great for portability, especially for running these old games - otherwise you've got to deal with either running DOSBox or setting up all the infrastructure and getting things working. And I have a lot of hope that this will make old software that doesn't run in modern environments more accessible. SSH peer-to-peer gives the ability for two hosts to communicate directly, doing the NAT traversal. So let's say you want to run things in two data centers - you can just connect them directly to each other. And it's also super exciting on the protocol side, because you could have one server running Rust, one server running Go, another server running Java, and they're all talking directly to each other without having to go through Redis or some kind of other pub/sub. There are fewer points of failure, you'll see greater throughput, and I think it also brings cloud independence, instead of having to stick your servers in one place. Snowflake is doing NAT traversal via data channels. They have the browser talk to an instance via WebRTC - a Go process that's sitting in the cloud. And that's super exciting, because it means it's something that ISPs or your government can't block - you can't block data channels in WebRTC, because they're used by so many other things and it's right in the browser, so you can't block people from downloading binaries either.
The ubiquity of WebRTC is bringing information to people that don't have it. And Webwormhole: now, instead of having to use a proprietary service to share files with each other, you can share files with each other directly from the browser, or you can download the command line client that's written in Go and send files right from there. And I think that's super awesome. And then you have representation in the VR space. Everyone is connected here via WebRTC and they're sending their video frames up, and you can see all these other people are in this space as well, talking to each other during COVID. I think it was great that people could still get that in-person time - it does the spatial audio and it tries to create these virtual spaces when we don't have them. Project Lightspeed is an OBS to WebRTC bridge, so you can cast your OBS output in sub-second time on a self-hosted instance. It uses the FTL protocol and then bridges with WebRTC. And I think that's super powerful, because instead of the burden of setting up RTMP servers and other stuff like that, you can just have this single-click-deploy Docker container and get everything up and running, and it runs in sub-second time. Having this self-hosted infrastructure is super important. So I hope you learned something. I hope this encourages people to think about how we can make things easier for those coming after us. I don't want to see people reinvent congestion control because what we have today isn't accessible. And then also, if any of the projects that I just demoed were interesting to you, just go ahead and look up Awesome Pion - all the projects were written using the Pion Go library. And thank you very much. We have a Slack that is almost up to 1500 members; we'd love for you to join and talk - the conversations aren't just about Go, they're also about lots of other things. And always feel free to reach out to me personally. I love talking about this stuff and I'm very passionate about making it more accessible for other people. It's great that open source lets us build these things that make other people's lives better, and it's not really difficult for us. We are...
WebRTC was the technology everyone wanted to learn in 2020. With COVID and WFH, new developers and companies came pouring into the scene. They had lots of problems making their vision happen. Many of them didn't even know what WebRTC was. When they figured that out, they still had to make the long journey of figuring out how to build. This talk is my reflection on helping developers with Pion WebRTC. WebRTC has so much potential. We just need to solve some technical, educational and cultural problems. Out of those experiences we started WebRTC for the Curious and tried to make Pion easier to use. I also have some future ideas that I would love help with from the RTC community.
10.5446/53375 (DOI)
Hello everyone, my name is Geoffrey Huguet and I work at AdaCore. AdaCore is a company providing open source tools for the Ada language. In my team, we develop a tool for formal verification of a subset of Ada, named Spark. Today, I will present why and how we added contracts to the GCC GNAT Ada standard libraries. The presentation is organized in four parts. First, I will give a quick overview of the Ada and Spark languages. Then I will explain which problems we can encounter when using external libraries in Spark. Next, I will talk about how we added contracts to the Ada standard libraries and which level of detail we can achieve with those. And finally, I will talk about related works on other languages or other libraries. First, I will talk about the Ada and Spark languages. Ada is a general purpose language. It's quite old - it was first released in 1983. It has a Pascal-like syntax where declarations and instructions are separated. It's a strongly typed language and the user can provide additional constraints. For example, here, Small_Int is an integer type ranging from -100 to 100. We can define a subtype of Small_Int, Small_Nat, with an additional constraint of being in the range from 0 to 100. Ada also natively supports arrays. For example, Small_Int_Array is a type of array indexed by integers from 1 to 10 and containing values of type Small_Int. These types and constraints are associated with checks mandated by the language. If I assign a value X to Y, which is of type Small_Nat, there will be a range check to ensure that X is in the correct range, here between 0 and 100. If I access a value in an array, an index check will be performed to ensure that the index is in the bounds of the array. These checks are performed at compile time if the values are statically known; otherwise, the compiler will insert code to perform those checks at runtime. Contract-based programming is another feature of Ada. We can add contracts, for example pre- and postconditions, to subprograms. Subprograms are either functions, which return a value, or procedures, which work by side effect. In the example, Increment is a procedure taking X as a parameter. The mode of X is 'in out', which means that Increment will be able to read and write X in its body. We can also add a pre- and postcondition to Increment, which are made of Boolean expressions. The precondition states that it's not possible to call Increment on the last integer - and indeed we cannot increment the last integer. The postcondition states that the value of X at the end of the execution of Increment will be strictly larger than the value before the call. It's also possible to define strong and weak type invariants. Strong invariants need to hold all the time, while weak invariants need to hold only at the boundary of the enclosing package of the type. This example shows a definition of a type of array sorted in ascending order. The invariant here is strong, so it should hold all the time. It states that for every index in the array before the last one, the value stored at the index should be smaller than the value stored at the next index. All these contracts can also be executed at runtime if we specify it at compilation: the compiler will insert code to check them dynamically. The goal of Spark is to verify statically, so without running the code, that the checks that are usually performed at runtime will pass.
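To make the overview concrete, here is a small, self-contained sketch of the kinds of declarations just described: a constrained integer type and subtype, an array type, a sorted-array predicate (what the talk calls a "strong invariant"), and a procedure with a pre- and postcondition. The names (Small_Int, Small_Nat, Increment, and so on) follow the talk's wording but are my own reconstruction, not the actual slide code.

```ada
--  Minimal sketch; names and exact bounds are assumptions based on the talk.
procedure Overview_Sketch with SPARK_Mode is

   type Small_Int is range -100 .. 100;
   subtype Small_Nat is Small_Int range 0 .. 100;

   type Small_Int_Array is array (1 .. 10) of Small_Int;

   --  The "strong invariant" is expressed here as a subtype predicate:
   --  it must hold for every value of the subtype, all the time.
   subtype Sorted_Array is Small_Int_Array with
     Dynamic_Predicate =>
       (for all I in 1 .. 9 =>
          Sorted_Array (I) <= Sorted_Array (I + 1));

   procedure Increment (X : in out Integer) with
     Pre  => X < Integer'Last,   --  cannot increment the last integer
     Post => X > X'Old           --  X grows strictly across the call
   is
   begin
      X := X + 1;
   end Increment;

   N : Integer      := 41;
   Y : Small_Nat    := 0;                                 --  range check
   S : Sorted_Array := (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);   --  predicate check
begin
   Increment (N);            --  GNATprove proves the precondition here
   Y := Small_Nat (S (1));   --  index check and range check, both provable
end Overview_Sketch;
```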
It will prove the checks mandated by the Ada language, such as range checks or index checks, but it will also verify the contracts: invariants, pre- and postconditions. If I declare a value of type Sorted_Array, Spark will check that the predicate on Sorted_Array holds for the given value. When calling a subprogram with a precondition, such as Increment, Spark will prove that the precondition holds with the given parameter. Spark is a static analyzer, which means that it will not run the program to analyze it. Spark proceeds with deductive verification: it takes the code annotated with contracts, which will be translated to mathematical formulas. Those formulas will be given to automatic provers, which will decide whether the properties are valid or not. Spark analysis is sound, which means that it will not miss any error. However, it is not complete. This means that when Spark cannot prove a check, it doesn't mean that the check will fail at runtime; Spark might just be missing information to prove it. Now, I will give more context about the problems that can happen when using unannotated libraries in Spark code. Spark analysis is modular. This means that each subprogram will be analyzed separately. To do so, when another subprogram is called, Spark will try to prove its precondition and then assume its postcondition. If the subprogram is not annotated, Spark makes several assumptions. First, it will assume that no exception will be raised and that the subprogram has no effect on global variables. That's the case here. In this example, I used subprograms from the standard library Ada.Strings.Unbounded, which provides strings with variable lengths, and Ada.Text_IO, the input-output library. When I analyze the subprogram, Spark says that there are no Global contracts available; it assumes that the subprograms used have no effect on global items. We can ask ourselves if Spark is making correct assumptions here. For the use of unbounded strings, the answer is probably yes: the subprograms actually have no effect on global items. But for IO, this is less sure, because IO procedures have an effect on the file system, for example. Thus, we need to annotate the subprograms with correct Global contracts, at least, so that Spark makes correct assumptions. First, I will talk about modeling the global effects of subprograms. As said earlier, subprograms from Ada.Strings.Unbounded have no effect on global items. We can add these contracts to the subprograms with the aspect Global => null. When we do this, the warning emitted by Spark disappears. However, this is not the case for Text_IO. Procedures here have an effect on the file system, or even on the standard output. There is no global variable representing the file system, so how can we model these effects? One solution in Spark is to create what we call an abstract state. In fact, it's a virtual object, and we decide that it represents the file system. It does not need to be linked to any variable in the program. We declare in the example that the package Text_IO has an abstract state called File_System, and afterwards we can use it as any other global variable in contracts. Here, procedure Get has an In_Out effect on File_System. This way, it's possible to model correctly, even if it is not precise, the global effects of the called subprograms. Spark will now make correct assumptions about the global effects of subprograms from Text_IO.
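As a rough illustration of the two annotations just described - Global => null on a side-effect-free operation, and an abstract state standing in for the file system - here is a spec-only sketch. The subprogram names and the File_System state are modelled on what the talk describes, not copied from the real GNAT units, and the package body (with its Refined_State) is omitted for brevity.

```ada
--  Spec-only sketch; the body and its Refined_State are omitted.
package IO_Model_Sketch with
  SPARK_Mode,
  Abstract_State => File_System
is

   --  No effect on any global data: SPARK no longer has to assume anything.
   procedure Normalize (S : in out String) with
     Global => null;

   --  Reading a line consumes input, i.e. it modifies the virtual state
   --  that stands for the file system.
   procedure Get_Line (Item : out String; Last : out Natural) with
     Global => (In_Out => File_System);

end IO_Model_Sketch;
```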
It's also possible to add contracts to protect from runtime errors. In the Ada Reference Manual, which defines the specifications of the GNAT Ada standard libraries, we can see that the behavior of subprograms is fully detailed. For example, the Insert function may propagate Index_Error if the parameters are inconsistent: here, Before needs to be in the correct range. And indeed, if we test it on this code, the parameters do not satisfy what's specified in the reference manual, and at execution, an Index_Error is raised. But if we analyze the code with Spark, it doesn't say anything. In fact, this is because Spark doesn't enter the body of Insert, and thus doesn't know that it might propagate the error. We need to add a precondition to prevent calls to Insert with inconsistent parameters. So first, we ensure that Before is in the expected range. The second part of the precondition is about the length: we need to make sure that the length of the resulting string is not larger than the last integer, otherwise, at runtime, there will be an overflow. If we rerun the proof, now Spark complains that the Insert function has not been called with consistent parameters, which is what we want. Let's move on to a second example, on the Text_IO library. It defines procedures opening and deleting a file, respectively Open and Delete. Those procedures can propagate Status_Error, depending on whether the file is open or not. Fortunately, a function Is_Open is defined later and allows us to test exactly this, and it's possible to use it in the contract. So we add preconditions to the procedures Open and Delete: we only want to open files that are not already open, and we only want to delete files that are open. And we can test it on actual code. The first call to Delete at line 4 will propagate an error because it's called on a file that's not open. Spark detects it and says that the precondition may fail. However, lines 5 and 6 are correct: the subprograms are called in the correct order. But Spark does not manage to prove that we used the procedures correctly. We also get messages stating that files are not initialized. In fact, Spark is missing two pieces of information here. First, we need to annotate the type so that Spark knows that, by default, the file is not open. Second, we need to add postconditions to Open and Delete so that Spark knows the open status of the file after a call to these subprograms. So first, I add a Default_Initial_Condition to the type File_Type. This way, Spark will know that if we don't provide a value to a variable of type File_Type at its declaration, Is_Open will return False on this variable. We also add postconditions to Open and Delete to specify their action on the file: after the call to Open, the file is open; after the call to Delete, the file is closed. And if we run the proof, we have everything that we wanted. Spark emits a check message when we try to call Delete on the file that is not open, but it's also able to prove that we used the procedures correctly in the next lines. This is not the only error that can be propagated by subprograms from Ada.Text_IO; there are plenty of others. Mode_Error is related to the mode in which files are open: In_File is read mode, Out_File is write mode. Mode_Error can be propagated when we try to read characters from a file open in write mode, for example. And we handle this error in preconditions as well, like Status_Error.
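Here is a sketch of the shape of those File_Type annotations: the Default_Initial_Condition plus the Is_Open pre- and postconditions on Open and Delete. This is a simplified stand-in, not the actual Ada.Text_IO spec, and the Global/abstract-state side shown earlier is left out to keep it short, as is the package body.

```ada
--  Simplified, spec-only sketch of Text_IO-style contracts.
package Text_IO_Sketch with SPARK_Mode is

   type File_Type is limited private with
     Default_Initial_Condition => not Is_Open (File_Type);

   function Is_Open (File : File_Type) return Boolean;

   procedure Open (File : in out File_Type; Name : String) with
     Pre  => not Is_Open (File),
     Post => Is_Open (File);

   procedure Delete (File : in out File_Type) with
     Pre  => Is_Open (File),
     Post => not Is_Open (File);

   --  Client code such as:
   --     Open (F, "data.txt");  Delete (F);   --  proved
   --     Delete (F);                          --  precondition check fails

private

   type File_Type is limited record
      Is_Opened : Boolean := False;
   end record;

   function Is_Open (File : File_Type) return Boolean is (File.Is_Opened);

end Text_IO_Sketch;
```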
There are other errors, such as Name_Error, raised when the file does not exist on the file system, or End_Error, raised when a file terminator is read by the procedure. Use_Error is related to the external environment. We could not add preconditions protecting from these errors, since it was not possible to express them with Spark contracts. In certain cases, we can add complete contracts to fully detail the actions of subprograms. Let's take the example with the call to Insert. I added an assertion after the call to check that Str2 is equal to "abcd". An assertion is just a Boolean expression that we want Spark to prove in the subprogram body. But Spark says that it cannot prove the assertion; it's missing some information. Let's have a look at the contract of Insert. Indeed, we don't have any postcondition on Insert, so Spark doesn't have any information on the returned string after the call. So I'm taking the other part of the rule in the reference manual, which states that the content of the returned string is the concatenation of the elements of Source before position Before, then the elements of New_Item, and then the rest of the elements of Source, if they exist. It's also specified that the lower bound of the returned string is 1. So let's add this as a postcondition. First, we write that the lower bound of the returned string is 1, and also that its length is the length of Source plus the length of New_Item. The first elements of the result, up to Before, are the first elements of Source. The next elements are those of New_Item. And finally, if there are remaining characters in Source, they are appended to the result. And now the assertion is proved: Spark has everything it needs to prove the content of Str2 after the call. Insert is one subprogram from a long list of subprograms working on fixed-length strings. They are part of the library Ada.Strings.Fixed. There are search subprograms, string translations, transformations, selectors and constructors. Today, most of them are specified with complete contracts, just like Insert, which allows any user to try and be able to prove precise properties when working with strings. Finally, I want to talk about similar works that have been done for other libraries. For example, C also has a verification tool, called Frama-C, and it is packaged with annotated header files for the C standard libraries. It provides contracts, sometimes incomplete, but also complete ones for certain libraries. GrammaTech has been doing some work to annotate more libraries from the C standard libraries. For Java, it's the case as well: some of the standard libraries are annotated for OpenJML, one verification tool for Java code. Besides this, the community can be involved in this effort as well. There's an initiative called Annotations for All. The goal is to enable anyone in the community to participate in this effort. You can check it out at the link annotationsforall.org. But standard libraries are not the only libraries that can be used in Spark code. Third-party libraries might be used as well, and users will face the same problems as with standard libraries if they are not annotated. One of the projects at AdaCore was to provide a Spark binding of two cryptography libraries, TweetNaCl and libsodium. The Spark binding consists in binding the C functions from the original libraries, but taking advantage of contract-based programming in Ada to ensure the correct usage of the libraries, and also to prove user code.
Another project was a Spark binding of CycloneTCP, an implementation of a TCP/IP stack in C. A similar work has been done for this library, but some parts were entirely rewritten in Spark in order to verify them, adding even more reliability to it. There are some next steps that are already planned for us. First, we want to specify more libraries from the GCC GNAT Ada standard libraries, as it allows to make the software more reliable. Secondly, the work we did on the standard libraries can also lead to a verification of the GNAT implementation of the libraries. In conclusion, I want to emphasize three main points. First, there are different levels of detail when adding contracts. The three I presented today were the modeling of global effects, the protection from runtime errors, and then the complete contracts. All of them help to increase the safety of software. They can serve a proof purpose, because we're able to verify more properties, but they can also be seen as documentation: indeed, we are able to know, even partially, what the effects of the subprograms are just by looking at their contracts. Finally, this is a substantial effort. Contracts can be very long or difficult to express, for example in Text_IO. The community can also participate in this effort, through initiatives like Annotations for All. If you're interested in the subject, you can check out a blog post on the AdaCore blog. It talks about the binding and annotation of two cryptography libraries, TweetNaCl and libsodium. If you're interested in trying Ada or Spark, you can check out the online Ada and Spark courses on learn.adacore.com or download the Spark toolset on the AdaCore website. Since Spark is an open source tool, you can access its source code in the spark2014 repository on GitHub. Thank you for your attention.
The guarantees provided by SPARK, an open-source formal proof tool for Ada, and its analysis are only as strong as the properties that were initially specified. In particular, use of third-party libraries or the Ada standard libraries may weaken the analysis, if the relevant properties of the library API are not specified. We progressively added contracts to some of the GCC GNAT Ada standard libraries to enable users to prove additional properties when using them, thus increasing the safety of their programs. In this talk, I will present the different levels of insurance those contracts can provide, from preventing some run-time errors to occur, to describing entirely their action.
10.5446/53376 (DOI)
Hello everybody, my name is Claire Dross and I work at AdaCore, a company providing open source tools for the Ada language. In my team, we develop a tool for formal verification of a subset of Ada named Spark. One of the major restrictions imposed by the subset is the absence of aliasing. It simplifies verification notably, as the tool can safely compute the data that can be modified by each instruction. Because of this restriction, pointers have been excluded from the Spark subset from the beginning, as a major source of aliasing. Some support for pointers has been added in the recent versions of Spark, using an ownership policy inspired by Rust in order to prevent aliasing. In this talk, I will present this support and try to give you a feeling of what kind of algorithms are currently supported in Spark for pointer-based data structures, and how they can be verified. This presentation will be separated in six parts. First, I will do a quick review of the Ada and Spark languages. Then I will explain how we have added pointer support to the Spark subset and which restrictions we have chosen to support pointer types in Spark. Then I will go to recursive data structures, which are implemented using pointers in Ada, and I will explain to what extent they are supported in Spark. Afterwards, we will go to algorithms on these recursive data structures, and in particular we will try to understand how we can traverse them using a loop, in an iterative way. This can be done using borrowing of ownership, as we will see. Afterwards, we will explain how this borrowing is supported in proof, using what we call the predicted value. And finally, we will explain how we can annotate loops doing traversals of recursive data structures using a new kind of annotation named pledges. So first, a quick overview of the Ada and Spark languages. Ada is a general purpose language. It is rather old: its first release was in 1983. It has a Pascal-like syntax where declarations are separated from instructions. If you want to declare something inside a list of instructions, you need to use a declare block, which is an instruction. You have one here: it has a part before the begin which consists of declarations - here we declare a value Y - and then you have instructions after the begin which are in the scope of Y. After the end, Y goes out of scope. Ada is strongly typed. You can declare new types that you can name, and these types can have additional constraints. For example, here we have a type Small_Int which is an integer type ranging from -100 to 100. I can then declare a subtype of Small_Int, called Small_Nat, with an additional range constraint, ranging from 0 to 100. Ada also natively supports arrays. We have an example here: the type Small_Int_Array is indexed by integers from 1 to 10 and contains values of type Small_Int. These types and these constraints are associated with checks which are mandated by the language. For example, if I store a value in a variable of type Small_Nat, I have a range check, which makes sure that the value is indeed in the range of the type's constraint. If I access a value inside an array, I will have an index check to make sure that the index is indeed in the bounds of the array. These checks can be performed at compile time by the compiler if everything is statically known. Otherwise, the compiler will introduce code so that these checks are performed dynamically at runtime. In this case, if the check fails, an exception will be raised.
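Purely as an illustration of the declare block and the checks just mentioned, here is a tiny sketch; the type names follow the talk's wording and are my reconstruction, not the actual slides.

```ada
--  Tiny sketch of a declare block, a constrained subtype, and the
--  associated range and index checks.
procedure Declare_Block_Sketch is

   type Small_Int is range -100 .. 100;
   subtype Small_Nat is Small_Int range 0 .. 100;
   type Small_Int_Array is array (1 .. 10) of Small_Int;

   A : Small_Int_Array := (others => 7);
   X : Small_Int       := -5;
begin
   declare
      Y : Small_Nat := A (3);   --  index check on A (3), range check on Y
   begin
      X := Y;                   --  always fine: Small_Nat is within Small_Int
   end;                         --  Y goes out of scope here
   pragma Assert (X = 7);
end Declare_Block_Sketch;
```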
It is also possible to do contract-based programming in Ada. You can write contracts, in particular, on subprograms. Subprograms in Ada are either functions or procedures: functions return a result, procedures work by side effect. Here we have an example. We have a procedure Increment, which takes a parameter X of type Integer. This parameter has mode 'in out', which means that I can modify it in the body of Increment. I can put a contract on this subprogram. It is made of two Boolean expressions, a precondition and a postcondition. The precondition says that I can only call Increment on a value X which is strictly smaller than the last integer. That is normal: if I want to increment an integer, it should not be the last integer. Then I have a postcondition saying that after the execution of Increment, the new value of X will be bigger than the value of X at the beginning of Increment. This value is written X'Old - 'Old here is called an attribute in Ada. It is also possible to put contracts on types. There are weak and strong type invariants. Strong type invariants must always hold, while weak type invariants must only hold at the boundary of the enclosing package of the type. Here we have an example: we have the type Sorted_Array, which contains only arrays which are sorted in ascending order. The predicate on it is a strong type invariant, which must always hold. It states that for all values in the index range of the array, save for the last, the value must be smaller than the value at the next index - so, sorted. These contracts are associated with checks, but only if the code is compiled with assertions enabled. In this case, there would be an exception if one of the contracts failed at runtime. The aim in Spark is to formally verify - statically verify - all these checks which are usually performed dynamically. So the aim is to verify runtime errors, like the range checks or index checks that we have seen before, and also all the contracts. So, if I declare a value of type Sorted_Array, I will have a check to make sure that the array is indeed sorted, and this will be verified by Spark statically, without running the code. In the same way, if I call Increment, I will have a check to make sure that the precondition will necessarily hold. So, Spark performs static verification, which means that the program is not run: everything is done by an analysis prior to running the program. It uses what is called deductive verification. It is a method which takes Spark code annotated with contracts - pre- and postconditions in particular - and then translates that into a set of mathematical formulas. These mathematical formulas are then handed to automatic SMT solvers, which will decide if they are valid or not. If all the formulas are valid, then this means that the Spark code and the contracts are correct. Spark is a sound tool, in the sense that if it can prove a program, then the program is correct. But it is not a complete tool, which means that if it cannot prove a program, it doesn't mean that the program necessarily has a bug. Generally, this kind of tool is either sound or complete, because of the undecidability of the underlying logic. Now, let's speak about pointers. As I have said before, pointers have been forbidden in Spark since the beginning, because in particular they were introducing aliasing between different data structures, but also because they have a wide range of specific errors associated to them - for example, double free or dangling memory, things like that.
The fact that pointers were not allowed in Spark was not as big a deal as one might think, in particular if we are used to, say, C, because in Ada, pointers are less necessary than in C. In particular, arrays are natively supported, so you don't need pointers for arrays. And also, as you have seen, there are parameter modes which can be used to say that some parameters can be modified by the subprogram, so you don't need a pointer to have a reference to, for example, an integer. Still, pointers are useful in Ada for specific things, in particular to define recursive data structures, and this is what we will be talking about later in this presentation. So first, let's just look at what pointers look like in Ada. In Ada, pointers are called access types. We can define an access type as is done on the slide: the type Int_Access is defined to be an access type designating Integer. If I want to dereference an object of this type, I will use the notation '.all'. So for example, here I have an assertion stating that the value referenced by X is 10. In Ada, you can allocate memory on the heap using the new keyword. You will give the value that you are using to initialize the data cell inside what is called a qualified expression, which is an expression which gives the type and the value, as you see here. And since Ada doesn't have a garbage collector, you need to deallocate the memory manually if you want to avoid memory leaks. To support pointers in Spark, we want to support them in a safe way, so we have introduced some restrictions on them. There are two of them. The first one is that we only support designated data which is allocated on the heap - so not on the stack, not globally. In Ada, there are two kinds of pointers. The first kind is named pool-specific access types; the type Int_Access that we have seen before is a pool-specific access type. The other kind, generalized access types, have the keyword 'all'. These ones are allowed to designate values everywhere, in particular on the stack. For example, if here I have a variable of type Integer named Y, I can get a pointer to this variable using the 'Access attribute. Both the declaration of generalized access types and the 'Access attribute are not allowed in Spark: you can only designate values which are located on the heap. This has some advantages. In particular, it avoids the problem of dangling references when some variables on the stack go out of scope. It also avoids the question of whether the designated object is allocated on the stack or on the heap when you want to deallocate it. And it also avoids the possibility of aliasing between an object declared on the stack and a pointer. The other restriction that we impose in Spark is an ownership model, which is used to avoid aliasing between values defined on the heap. So let's see a bit more about that. The idea is that when you copy a pointer - when you do an assignment with a pointer - you will create, in fact, an alias. So for example, if I have a value X, initialized with the value 1, and then I do an assignment of X into Y, then X and Y are aliases of the same value. So if I modify, for example, the value designated by Y, then I also silently modify the value designated by X. We don't want that. To avoid that, we have a notion of ownership which is associated to a pointer. If the pointer has ownership of the value it designates, you can modify it. If it doesn't, you cannot.
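Here is a small sketch pulling together the pieces just described - a pool-specific access type, allocation with new, dereference with .all, explicit deallocation - together with the move of ownership on assignment that is detailed next. The commented-out line is the kind of read that the SPARK ownership rules reject; the type name Int_Access follows the talk, the rest is my own illustration.

```ada
with Ada.Unchecked_Deallocation;

--  Sketch of a pool-specific access type with ownership in SPARK.
procedure Ownership_Sketch with SPARK_Mode is

   type Int_Access is access Integer;

   procedure Free is new Ada.Unchecked_Deallocation (Integer, Int_Access);

   X : Int_Access := new Integer'(1);   --  heap allocation; X owns the cell
   Y : Int_Access;
begin
   pragma Assert (X.all = 1);           --  dereference; X is known non-null

   Y := X;                              --  assignment moves ownership to Y
   Y.all := 2;                          --  fine: Y is now the owner
   --  pragma Assert (X.all = 2);       --  rejected: X lost its ownership,
                                        --  so it can no longer be read

   pragma Assert (Y.all = 2);
   Free (Y);                            --  no GC: deallocate manually,
                                        --  otherwise SPARK reports a leak
end Ownership_Sketch;
```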
And the ownership is moved from one pointer to the other on assignment. So for example, if I have the pointer X which designates the value 1, when I initialize it, it has the ownership of the value it designates. Then I assign X into Y and I move the ownership: it is transferred from X to Y. So afterward, Y has the ownership of the cell, so I can read it and modify it through Y, but it is not allowed anymore to read it or modify it through X. In practice, this means that it is still possible to have aliases in a Spark program, but you will not be able to see them, because you can no longer read X after you have moved the ownership to Y. So if you modify something through Y, you will not be able to see that the value designated by X is also modified. Thanks to these restrictions, we can support access types in proof in a relatively simple way. We can simply ignore all the aliases. We say that all structures behind pointers are in fact normal structures with no aliases; they are distinct. And then the only thing that we have to worry about is whether they are null or not. So in fact, they are handled a bit like what we call in functional languages option types or maybe types. We say that a pointer either is null, or, if it is not null, it has a value and we know the value. So for example, here we have a pointer X which designates the value 10, so we know that it is not null, and when we do the dereference, we can check that it is not a dereference of a null pointer. Note that this translation is only valid because of the ownership rules that we have seen before. Indeed, if I had X and Y which were aliases of each other and I modified the value through Y, then I would still be able to prove an assertion saying that the value designated by X is 10, because, as I said before, we do not consider the aliases for the analysis; we consider that there are no aliases, that all the data structures are independent or standalone. Fortunately, this situation is not allowed by the ownership rule, which is why we can use such a simple encoding. So now let us go to recursive data structures. In Ada, it is necessary to use a pointer type, an access type, to construct a recursive data type. The idea is that first you declare the type without giving any definition; this is called an incomplete declaration. For example, here I have an incomplete declaration of the type List_Cell. With this incomplete declaration, I cannot do much with the type declared like that, I cannot, for example, use it as a parameter of a function or something like that, but I can still create an access type to this type. So here I can create the access type List, which designates values of the type List_Cell. Then I can create a completion of the type List_Cell, and in this completion I can use the type List, closing the recursion. So here List_Cell has a field Data which is an integer and a field Next with the rest of the list. Of course, it is possible to create more complicated recursive data types by interleaving mutually recursive definitions. For example, I could put in the middle here an array type which contains values of type List, and then I could have an n-ary tree, for example. In Spark, there are no specific restrictions regarding which recursive data types you are declaring. All recursive data types are okay in Spark. But if you want to construct the structure in Spark, then you are restricted by the ownership policy.
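Here is a sketch of the recursive list declaration just described, together with a small assertion illustrating the "null or known value" view used in proof; the type and field names follow the talk (List_Cell, List, Data, Next), the rest is a reconstruction.

type List_Cell;                  --  incomplete declaration: no definition yet
type List is access List_Cell;   --  access type designating the incomplete type

type List_Cell is record         --  completion, closing the recursion
   Data : Integer;
   Next : List;
end record;

X : List := new List_Cell'(Data => 10, Next => null);

--  For proof, X is either null or a known value; here it is known not to be null
pragma Assert (X /= null and then X.Data = 10);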
As we said, we do not support aliasing in Spark, so we will not be able to support data structures which have cycles in them. This means that singly-linked lists and trees are okay, but doubly-linked lists or DAGs are not. Now let us go to algorithms over these data structures. Most algorithms over recursive data structures involve a traversal. There are two ways to traverse a recursive data structure: we can do it either iteratively, using a loop, or recursively, with recursive calls. If we design the algorithm in a recursive way, the ownership does not cause any trouble. For example, here I have two functions which are defined in a recursive way. The function Length computes the number of elements in a list in a recursive way: if the list is null then it returns zero, otherwise it returns one plus the length of the rest of the list. The function Nth is similar. It takes a position I as a positive integer and a list X, and it returns the Data field of the element at position I in X. So if I is one, it will return X.Data, and otherwise it will search for the element at position I minus one in X.Next. These two definitions are perfectly acceptable in Spark, and the tool can be used, for example, to check that the precondition of Nth, stating that I must be smaller than the length of X, is enough to make sure that inside the body of Nth we never dereference a null pointer. As Ada is an imperative language, we would also like to be able to traverse recursive data structures using a loop. Unfortunately, this is a bit more complicated with ownership. So here I have an example of a procedure Set_All_To_Zero which takes as a parameter a list X and which sets all of its elements to zero. This procedure uses a loop which sets the Data field of each element to zero. So let's look at what happens with the ownership policy of Spark. When we create the value Y which is used to traverse the list, we transfer the ownership of the list designated by X to Y. Then, when inside the loop we do Y := Y.Next, we go to the next element of the list, and the ownership of the first cell of Y is completely lost. Indeed, it is not accessible through Y anymore, and it's not accessible through X either, since X doesn't have the ownership. In fact, you will have an error here if you run the Spark tool, because it will say that there is a memory leak: the memory of the first cell is lost. Then, when you reach the end of Set_All_To_Zero, you will not be able to restore the ownership to X, since you don't have it anymore. So how can we support this kind of algorithm? What we need here, in fact, is a way to transfer the ownership of the list X to the value Y, but only for the duration of the subprogram, so that it returns automatically to X afterwards. This notion exists in Rust, where it is called borrowing, and in Spark we have borrowed the notion from Rust. We can say in Spark that an object that we declare is not a new structure but only a borrower of the structure. It is not a move, it is a borrow. For that we must use a type which is named an anonymous access type. This means that it is not a named access type like the type List that we had before. It is an anonymous access type, so it has no name; it is just access List_Cell. If we use this type, it means that the ownership is just transferred temporarily from X to Y for the scope of Y. So inside the scope of Y, here inside the begin part of the declare block that we have here, the ownership is transferred to Y.
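Before moving on, here is a sketch of the two recursive functions mentioned above, written as Ada expression functions. The parameter type access constant List_Cell is an assumption (it lets the same functions be reused later on borrowers and at-end values), and the contract on Nth is likewise a guess at the slide's precondition.

--  List and List_Cell as declared in the earlier sketch

function Length (X : access constant List_Cell) return Natural is
  (if X = null then 0 else 1 + Length (X.Next));
--  (a full version would also have to guard against overflow on huge lists)

function Nth (I : Positive; X : access constant List_Cell) return Integer is
  (if I = 1 then X.Data else Nth (I - 1, X.Next))
with Pre => I <= Length (X);
--  the precondition is enough for the tool to show Nth never dereferences null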
So it is a bit as if X was moved into Y: you are able to access and modify the data structure through Y, but you are not allowed to either access it or modify it through X anymore. But when we reach the end of the declare block, then Y goes out of scope and automatically the ownership returns to X. So we can again access and modify the data structure through X. This is exactly what we want for the loop that we have seen before. When you have a borrower, it is possible to modify it so that it designates another part of the data structure than it used to, typically a deeper part. So for example, here I have, like before, a borrower Y which borrows the ownership of the list X. When I do Y.Data := 0, I modify not the handle Y but the underlying data structure X. When I do Y := Y.Next.Next, I don't modify the underlying structure X; I modify the handle, the borrower itself, so that it borrows something deeper in the data structure. This is called a reborrow. Using reborrows, it is possible to modify our procedure Set_All_To_Zero so that it is valid Spark. For that, I only need to modify the type of Y so that it is an anonymous access type. Then I know that the ownership of X will be transferred to Y, but only temporarily, for the scope of Y, which is the body of Set_All_To_Zero. Inside the loop, when I do Y.Data := 0, I modify the underlying structure. And like before, when I do Y := Y.Next, I do what we call a reborrow, which changes the handle so that it designates something deeper in X. When I reach the end of Set_All_To_Zero, the ownership returns automatically to X, which is what I want here. So thanks to this notion of borrowing, it is possible in Spark to do simple traversals of recursive data structures. More complex traversals, in particular involving a stack, are not possible in Spark because of the anti-aliasing rules. Now, we have seen that to be able to traverse data structures using a loop, we have introduced a notion of borrowers, which in fact are basically aliases, which can be used to modify a data structure arbitrarily deeply, at a place which is not statically known because of reborrows. So how do we support that in proof, when we have said that we don't support aliases in proof? As we have said before, in proof we don't consider pointers to be anything special; we don't consider possible aliases between different data structures. So when we encounter the declaration of Y, we create a standalone data structure which has no particular link to the data structure X, except that they have the same value at the point where we do the borrow. Then, while I am inside the scope of Y, I can modify this standalone value without a problem. So first Y is modified to only designate the last cell of the list, and then this cell is updated so that it contains the value 0. While I am in the scope of this borrower, the fact that I modify X when I modify Y is not really a problem, because, like before, I don't see it: because of the ownership rule, I don't see it. So I don't really care that when I modified Y.Data, I was also modifying X. But when I reach the end of the scope of Y, here I have a problem, because now I am allowed to access X again. So I need to know what its value is, and I know that this value might have been modified during the borrow. So the question is, what do I do?
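Before looking at how the proof handles this, here is a sketch of the loop-based Set_All_To_Zero once Y is declared with an anonymous access type, as just described; this is a reconstruction, and actually proving it needs the loop invariants discussed later in the talk.

--  List and List_Cell as declared in the earlier sketch

procedure Set_All_To_Zero (X : List) is
   --  Y is a borrower: ownership of X is transferred to Y only for the
   --  scope of Y (the body of the procedure) and then returns to X.
   Y : access List_Cell := X;
begin
   while Y /= null loop
      Y.Data := 0;   --  modifies the underlying structure designated by X
      Y := Y.Next;   --  a reborrow: Y now designates a deeper part of X
   end loop;
end Set_All_To_Zero;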
The idea is that, along with this standalone value for the borrower, we will also create additional values: standalone values for the borrower and for the borrowed object, but not their values now, the values that they will have at the end of the borrow. Those are called predicted values. Of course, when we create these objects at the time of the borrow, we don't know the values that they will have at the end of the borrow. But that's not a problem, because we are doing a formal analysis and not execution or compilation, so we don't need to know the values to say that they exist. We only say that we have two objects, Y-at-end and X-at-end here, and that we don't know anything about their values except that they are equal. Because, since we know that X and Y are in fact aliases, we necessarily know that at the end of the borrow, X and Y will have the same value. When we reach the end of the borrow, we use these predicted values to reconstruct the value of X after the end of the borrow. The idea is that when you reach the end of the borrow, you know the new value of Y at the end of the borrow. For example, here in our small example, I have set Y.Data to 0. So I know that at the end of the borrow, the value of Y is 0, then 2, then 3. Since I also know, thanks to the relation that we have written between the values of Y and X at the end of the borrow, that X is equal to Y at the end of the borrow, I know that X at the end of the borrow is 0, 2 and 3. This is a bit more complicated when we do reborrows. So here I have the example that I had before, where I have the value Y which is a borrower of X, and then I do a reborrow, saying that Y := Y.Next.Next, and then I update the Data field of Y to 0. So here, when I do Y := Y.Next.Next, when I do the reborrow, I update the value of the standalone object for Y, so that it designates only the last cell of the list, like before. But I also need to update the relation between the value of X at the end of the borrow and the value of Y at the end of the borrow. Because of the reborrow, the value of Y at the end of the borrow will not be the value of the structure that was designated by Y at the beginning of the borrow; it is a value now which is only the third element of the list. So now I know that the value of X at the end of the borrow will be two elements and then the value of Y at the end of the borrow. What's more, I know the exact value of these two elements at the beginning of X. Indeed, these elements are what we call frozen by the borrow. This means that they can no longer be updated during the scope of Y. Indeed, they cannot be updated through Y, since Y doesn't have access to them anymore, and they cannot be updated through X, since X doesn't have ownership for the duration of the borrow. So we know that at the end of the borrow, X will be a structure starting with the value 1, then the value 2, and then the values that are in the value of Y at the end of the borrow. Following the same reasoning as before, I can update the current value of Y when I encounter the assignment Y.Data := 0. And then, when I reach the end of the borrow, with the value of Y at the end of the borrow that I have computed and the relations that I had before, I can deduce that X is a list with values 1, 2 and 0. And so I can prove an assertion saying that X.Next.Next.Data is 0 after the end of the borrow. So we have seen that we can support pointers in Spark, basically by avoiding aliasing using an ownership policy.
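Here is a sketch of the reborrow scenario just described, with the assertion that becomes provable thanks to the predicted values; deallocation is omitted for brevity (the tool would report the resulting leak), and List and List_Cell are as in the earlier sketch.

procedure Reborrow_Demo is
   X : List :=
     new List_Cell'(1, new List_Cell'(2, new List_Cell'(3, null)));
begin
   declare
      Y : access List_Cell := X;   --  borrow of X
   begin
      Y := Y.Next.Next;            --  reborrow: Y now designates the last cell
      Y.Data := 0;                 --  silently updates X through the alias
   end;                            --  end of the borrow: ownership returns to X

   pragma Assert (X.Next.Next.Data = 0);  --  provable from the predicted values
end Reborrow_Demo;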
We have seen that data structures defined in Ada through pointers are supported in Spark too. To support traversals using a loop on these structures, we have introduced a notion of borrowers, which are basically aliases, and we have seen how they can be handled for the proof. Now we will speak about how we can annotate the programs which are using this kind of technique, which are using borrowers, like our procedure Set_All_To_Zero that we have seen before. First, let's justify why we need annotations at all. The small programs that we have seen until now are proved without any kind of additional annotation for the tool. However, here we want to verify algorithms which traverse recursive data structures using a loop. And who says loops and deductive verification says loop invariants. For those who are not familiar with deductive verification, it is a process which works by traversing the control flow graph and constructing a set of logical formulas which are equivalent to the semantics of the program, in a step-by-step way. Of course, if you are traversing the control flow graph and constructing something in a step-by-step way, you will have trouble when you encounter a cycle in the control flow graph. The solution usually used by deductive verification tools is to ask the user to provide additional information through a loop invariant. This loop invariant should summarize the information that is known in the loop. It is used as a cut point by the tool to flatten the control flow graph, so as to have only acyclic paths in the program. Writing the loop invariant is generally considered cumbersome by users, but unfortunately, generating them automatically is still a research topic. So the user needs to supply the invariant, and it needs to be supplied in a careful way, because since this invariant is used as a cut point, if you are not precise enough in what you say, you will not be able to prove much about your program. Basically, you must describe everything which has been done by the previous iterations of your loop. So here I have an example. It is a procedure also named Set_All_To_Zero, but it is simplified with respect to the previous one, because X is not a list, it is simply an array. So the loop which goes over the array X is just a for loop over the indexes of the array, which go from 1 to 10. I have put an invariant in this loop. It says that in the previous iterations of the loop, I have set the values to 0. It is a universally quantified formula which says that all elements of the array, up until the element at position I, are equal to 0. Now let's go back to the more complex example with the list. I have a loop, so I need to put an invariant. What do I want to write in my invariant? I want to say that, well, basically I have a certain number of zeros, as many as I have had loop iterations until now, and afterwards I have something which is an alias of the value Y. Well, this is not something that I can say normally in a loop invariant, because I am not allowed to speak about the value of X inside the scope of its borrower Y. So I need some kind of special annotation to be able to describe something like that. A research team at ETH Zurich, who are working on the verification of Rust programs using the Prusti tool, have come up with a notion of pledges, which are specific annotations exactly for this kind of case.
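A sketch of the simplified, array-based Set_All_To_Zero with the quantified loop invariant described above; the type name and the use of an unconstrained array instead of a fixed 1 .. 10 range are assumptions.

type Int_Array is array (Positive range <>) of Integer;

procedure Set_All_To_Zero (X : in out Int_Array) is
begin
   for I in X'Range loop
      X (I) := 0;
      --  everything up to position I has been set to zero by previous iterations
      pragma Loop_Invariant (for all J in X'First .. I => X (J) = 0);
   end loop;
end Set_All_To_Zero;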
The idea is that these annotations are used to speak about the values that the borrowed object and the borrower will have at the end of the borrow. In Spark, we don't have special annotations. We just call a pledge a standard assertion, or a standard loop invariant, or a standard contract, which speaks about the value of a borrowed object or a borrower at the end of the borrow. This can be done with special functions called at-end borrow functions, like the At_End function that we use here. In this example, we have, like before, a borrower Y which borrows the ownership of a list X, and we say in an assertion that the Data field of the value of X at the end will be equal to the Data field of the value of Y at the end of the borrow. This is true no matter what happens in the scope of the borrower, since X and Y are in fact aliases. Now, let us look at what we can prove about these pledges. When we prove something about pledges, we never do a look-ahead to know what will happen later in the program, even if the pledges are speaking about values at the end of the scope of the borrower. This means that we will prove things about values at the end of the borrow, but only based on what we already know for sure now, where we are in the subprogram, about these values. Here is an example. I have a list the same as before, with values 1, 2 and 3. Then I do a borrow of this X into Y, and then I do a reborrow, so that now Y designates X.Next.Next. At this point, I can say in a pledge that the value of X.Data at the end will be 1, and the value of X.Next.Data will be 2. This is provable because indeed we know at this point, since Y only designates X.Next.Next, that the values of X.Data and X.Next.Data will necessarily be the values that they have at this point, which are 1 and 2. In the next assertion, I say that the value of X.Next.Next.Data will be equal to the value of Y.Data at the end of the borrow. This is true too, because X.Next.Next.Data is an alias of Y.Data. In the last assertion, I say that X.Next.Next.Data is 3. At this point, it is true, but we don't know if it will still be the case at the end of the borrow, because, since Y still has access to this cell, to X.Next.Next.Data, it is possible that I will modify it in the rest of the borrow. So this cannot be proved. This cannot be proved no matter what happens afterwards in the program, since I don't do any look-ahead. I can only prove here something which I already know about the values at the end of the borrow. You will note that this closely resembles our notion of predicted values that we use for proof. So in fact, to prove pledges, we simply translate them as annotations on the predicted values. So here, for example, if I represent my predicted values like before, I see that I can prove that the value at the end of X.Data will be 1, the value at the end of X.Next.Data will be 2, and that I will not be able to prove that the value at the end of X.Next.Next.Data is 3, since I don't know that value at this point. Using this notion of pledges, I now want to prove my Set_All_To_Zero procedure. Please don't look at the loop invariants right now; we will go over them together. So what do I want to state in my invariant? I know that I want to speak about the value of X, which is borrowed, and to be able to do that, I need to use the pledges that we have just seen. These pledges allow us to speak about the value of X, but not now.
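In recent SPARK versions, such an at-end borrow function is typically a ghost identity function carrying a GNATprove annotation; the sketch below shows that pattern and the pledge-style assertions discussed above. The exact annotation spelling and the provability of each assertion should be checked against the SPARK documentation: this is a reconstruction, not the talk's verbatim code.

procedure Pledge_Demo is
   --  List and List_Cell as declared in the earlier sketch

   --  Ghost identity function marked as an at-end borrow function for GNATprove
   function At_End
     (L : access constant List_Cell) return access constant List_Cell
   is (L)
   with Ghost, Annotate => (GNATprove, At_End_Borrow);

   X : List :=
     new List_Cell'(1, new List_Cell'(2, new List_Cell'(3, null)));
begin
   declare
      Y : access List_Cell := X;   --  borrow of X
   begin
      Y := Y.Next.Next;            --  reborrow: Y now designates X.Next.Next

      --  Pledges: ordinary assertions about values at the end of the borrow
      pragma Assert (At_End (X).Data = 1);                          --  provable: frozen
      pragma Assert (At_End (X).Next.Data = 2);                     --  provable: frozen
      pragma Assert (At_End (X).Next.Next.Data = At_End (Y).Data);  --  provable: alias
      --  pragma Assert (At_End (X).Next.Next.Data = 3);  --  not provable: Y may still change it

      Y.Data := 0;
   end;
end Pledge_Demo;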
They can be used to speak about the value of X, but at the end of the borrow. And to be able to prove it, we must only put in there information that we already have about the value at the end of the borrow. So what do we want to say? Well, at a given iteration of the loop, what do we know about the value at the end of the borrow? We know what we have already modified and frozen, which is that X will have as many zeros as there have been iterations until now, because at each iteration we add a zero which is frozen, and then, afterward, it is an alias of Y. This is what is written in the small picture. So to state that, we first introduce a global variable to count the number of iterations that we have already done. It's named C. This variable, we say that it is ghost, because it is only used inside annotations. When we mark a variable as ghost in Spark, the compiler knows that, when the compilation is done without assertions, it can remove the variable completely from the code. So this variable is first initialized to 0, and then at each iteration it is incremented. The first loop invariant is a regular loop invariant. It speaks about the value of C. It says that C is the number of elements which have already been traversed. So it says that it is the length of the list Y at the beginning of the loop, which is accessed through the 'Loop_Entry attribute, similar to 'Old, minus the length of the list Y now, which gives the number of elements that we have already seen. Then the rest of the loop invariants are the pledges. They describe the picture which is at the bottom of the slide. The first one states that the length of X at the end of the borrow will be C plus the length of Y at the end of the borrow. The second speaks about the elements, using Nth. It says that for elements which are before position C, the value of X at this position will be 0, and for positions which are after C, the value will be the corresponding value in Y, which is exactly what is written in the picture. Thanks to these invariants, we are able to prove this program, and in particular to prove what we would like to prove on such a procedure, which is that the list X at the end of the call will have the same length as the list X at the beginning, and that all its elements will be equal to 0. So to conclude, we have seen that pointers are now supported in the most recent releases of Spark. They are supported using an ownership policy, which is used to enforce absence of aliasing. Thanks to this support, we can in particular implement recursive data structures in Spark, as long as they do not have cycles. If we want to do traversals on these structures, we can do recursive traversals, but if we want to do traversals using a loop, we need the notion of borrowing, which allows us to only temporarily transfer the ownership of a data structure to a borrower. These borrows, which are basically a kind of alias, are supported in Spark through predicted values, which are the values of the borrower and of the borrowed object at the end of the borrow. Then, to be able to verify these procedures used to traverse recursive data structures, we need to handle loops, and so to provide loop invariants. So we have introduced the notion of pledges, which are annotations for borrows. Thanks to all that, we can now verify programs in Spark involving recursive data structures. As an example, I have, for example, verified the insertion into a binary search tree in Spark, in an imperative way.
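Putting it together, here is a rough sketch of what the fully annotated procedure could look like, with the ghost counter and the pledge-style loop invariants described above. This is a reconstruction from the talk: the exact form of the invariants (including the use of 'Loop_Entry) may differ from the real example, and Length, Nth and At_End are the functions from the earlier sketches, taking access constant List_Cell so they can be applied to Y and to the at-end values.

C : Natural := 0 with Ghost;   --  global ghost counter of loop iterations

procedure Set_All_To_Zero (X : List) is
   Y : access List_Cell := X;  --  borrow of X for the duration of the procedure
begin
   C := 0;
   while Y /= null loop
      --  regular invariant: C counts the elements already traversed
      pragma Loop_Invariant (C = Length (Y)'Loop_Entry - Length (Y));
      --  pledges: at the end of the borrow, X is C zeros followed by Y
      pragma Loop_Invariant
        (Length (At_End (X)) = C + Length (At_End (Y)));
      pragma Loop_Invariant
        (for all I in 1 .. Length (At_End (X)) =>
           Nth (I, At_End (X)) =
             (if I <= C then 0 else Nth (I - C, At_End (Y))));

      Y.Data := 0;
      Y := Y.Next;
      C := C + 1;
   end loop;
end Set_All_To_Zero;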
I could also implement an insertion into a red-black tree, which is balanced, but for this one I had to use a recursive algorithm, because an imperative one, an iterative one, would need a parent field to be able to go back up in the data structure, and a parent field means a cycle, so it is not Spark-friendly. If you are interested, you may look at the blog posts that have been written on the subject, which you can find on the AdaCore blog. You can also look at an article that we published recently at the CAV conference. There, the implementation of the support of pledges and borrows is a bit different, because it was an older implementation; the one presented here is the newest one. If you're interested in Ada and Spark, you can find some courses on the Learn website of AdaCore. You can also download the Spark toolset for free from the AdaCore website. And since Spark is an open source tool, you can check out its sources on the GitHub repository. Thanks for your attention. Hello? I don't know if you can hear me. Hello, everybody. Is it working? Ah, yes, I can hear you. So, I don't know if there are questions. So, there are several questions. The first one is about positive assertions and negative assertions. I'm not sure I understand it completely. If it is a question about decidability and what is verified by Spark: when you verify a program in Spark, if Spark verifies the program, then you are sure that it is correct, which means that all the defects which are detected by Spark, like runtime exceptions and failures of assertions and contracts, you are sure that they will not occur. What you are not sure of is, if the tool doesn't verify the program and says that there is a defect which can occur, you are not sure that it can indeed occur. Possibly you are just missing some annotations and contracts, or maybe the solvers are not strong enough and it is too complex for them to verify. So, if your program is verified, you know that it is okay. If it is not verified, you are not sure that you have a bug. There is a question about memory leaks. Spark uses the ownership model to prevent aliasing. We can also use it to detect memory leaks. If you run Spark on your program, it will tell you if you have a memory leak. But the analysis done by Spark is not used by the compiler to automatically deallocate values which are not freed. The analysis of Spark is independent from what is done by the compiler. Currently, there are no ownership pointers in the Ada language. If some are added at some point, then probably there will be a notion of automatic deallocation when a value goes out of scope. There is another question about anonymous access types as subprogram parameters. This is handled by Spark. If the access is mutable, it will be a borrow. If the access is constant, then it will be what we call an observe. It is allowed to have multiple observers, as opposed to only one borrower at a single time. So it is supported. The next question is about the possibility to use anonymous access types to only read, and not modify, the underlying data structure. Yes, it is possible. It is called an observe. When you do that, you are still allowed to refer to the underlying structure which was observed, but only to read it, not to update it. You can have multiple observers for the same data structure. When all observers go out of scope, then the ownership returns completely to the initial data structure and you can modify it again.
Then there is a question about how fast the proof and the borrow checking are. The borrow checking is fast, no problem. It is an easy analysis which is not done through provers, which means that it is a bit dumb, there is nothing which is value-dependent in the borrow checking, but it is fast. The proofs are done through provers. Sometimes they take a lot of time; sometimes you will reach a timeout and not be able to prove something which is true, just because it takes too long. You will have to help the prover, to add assertions, to split your functions into smaller functions. It is a bit like what you do normally for proofs in deductive verification; nothing special about pointers. I don't know if you have other questions. Otherwise, everything was perfectly clear, or I have lost everybody. The next question is: what is the difference in semantics between anonymous access types and named access types? There is no deep semantic difference. In fact, it is rather that we use the different notations which exist in Ada to represent different concepts in Spark. We needed a notion in Spark of a borrow, as opposed to a move, and we used the already existing notion of anonymous access types to represent a borrower. It had some advantages. In particular, it is possible to convert from a named access type to an anonymous access type, but not the other way around; we don't want to be able to reassign it into something else. So it was rather that these types existed in Ada and it was practical to reuse them for the different notions needed for proof. But possibly at some point we may encounter an issue with this handling, and then we would change and add an annotation, maybe with an attribute or an aspect, to say that a type is a borrower instead of a standalone object. Another question: is it still possible to have storage errors? Spark doesn't verify storage errors at all. It doesn't verify them for the stack, it doesn't verify the absence of stack overflow, and it is the same with the heap: it doesn't verify that you still have memory on the heap when you allocate. That is something that should be verified by other means. Another question: will storage pools be supported in Spark? For now, we have no plan to support them. I suppose it could be done, at least for some usages, if we had interest from people. Usually, we have two reasons to add new things in Spark: either we think ourselves that it would be a good idea and it would make the language easier to use, or we have some users, either paying customers or not, asking for the enhancement. Here, we have not had anybody asking for the enhancement and we have not found the interest ourselves, so for now it is not planned. Is it possible to store a borrower in another data structure? The answer is no. You cannot do that. You can just use them on the stack. Last question: Spark only does subprogram-by-subprogram analysis; it doesn't do whole-program analysis. This means that you have to add contracts to all your subprograms to verify them. Don't hesitate to tell me if some answer is not complete for a question. Thank you. Thank you.
SPARK is an open-source tool for formal verification of the Ada language. Last year, support for pointers, aka access types, was added to SPARK. It works by enforcing an ownership policy somewhat similar to the one used in Rust. It ensures in particular that there is only one owner of a given piece of data at all times, which can be used to modify it. One of the most complex parts for verification is the notion of borrowing. It allows transferring the ownership of a part of a data structure, but only for a limited time; afterward, ownership returns to the initial owner. In this talk, I will explain how this can be achieved and, in particular, how we can describe in the specification the relation between the borrower and the borrowed object at all times.
10.5446/53379 (DOI)
Hello everybody. My name is Shuah Khan, and today I'm going to talk about advancing open source in safety-critical systems. Thanks for joining me today. Let's go ahead and start looking at what we are going to talk about today: Linux and safety-critical systems. What does that mean? First, let's look at what it takes to assess safety in a system. Assessing whether a system is safe or not requires understanding the system sufficiently. You have to have a good understanding of what's happening in the system in terms of the interactions between the different modules and components of that system. And if you are using Linux in that system, then you have to understand how Linux interacts with the different pieces in your system: the hardware components, the kernel itself, various kernel modules, and then the user space running on top of Linux. All of these have to come together, and you have to understand how these different modules interact. So what are the challenges involved in building products, safety-critical systems, based on Linux? We have to select Linux components and features that can be evaluated for safety, because you have to have a good understanding of the system itself. And then we have to identify the gaps that exist where more work needs to be done to evaluate the safety of those systems. So let's talk a little bit about ELISA and what we are doing. This is a challenge we have taken on in the ELISA project: to make it easier for companies to build and certify Linux-based safety-critical applications. So what are we doing? Our mission is to define and maintain a common set of elements, processes and tools that can be incorporated into specific Linux-based safety-critical systems. Linux, as you know, is an open source project. The project has its own processes and its own way development happens: how patch reviews happen and how content gets accepted into Linux releases. That is a part that doesn't change. We have to take those development processes as they are, look at them, and see how we can do safety analysis on them and come up with a safety model for them. Another thing we are trying to do is understand the limits; understanding the limits is another component of the whole process. We cannot engineer your system to be safe for you. We have to understand and explain how to apply the described processes and methods to a safety-critical system. We cannot create an out-of-tree Linux kernel for safety-critical applications, because of the way Linux kernel development works. Linux kernel development continues to make progress: you have a new release coming out roughly every three months, and as a result it is continuously moving forward. And we cannot relieve you from your responsibilities, legal obligations and liabilities. So what we have to do is provide a path forward for companies and peers to collaborate, to be able to use Linux in safety-critical systems. Now a bit of a project overview. We have a technical mailing list you can engage with, and we have various working groups. I will talk more about these working groups in detail a little later, but this is a quick snapshot of the working groups we have.
We have an automotive working group, we have a medical devices working group, and we have a safety architecture subgroup and also a kernel development process subgroup. So we are focusing on various aspects of putting together a safety-critical system based on Linux with these work groups: the automotive and medical groups focus on use cases, and the rest of them bring together the rest of the story. So let's talk a little bit about the technical strategy. What we are looking to do in ELISA is develop example qualitative analyses for automotive and medical device use cases. We want to take an automotive use case and a medical use case, keep the Linux kernel as the focus, and provide the resources for system integrators to apply and use to analyze their systems. It is twofold, qualitative and quantitative. In ELISA what we are focusing on is using Common Weakness Enumerations as a base to identify hazards for the two use cases, and then all of this data will be available for system integrators to use to analyze their own systems. The context here is that, as ELISA, we do not know the full picture of the systems themselves. So what we are trying to do is take Linux and provide enough resources and analysis on these two use cases, as examples, for system integrators to use to do analysis on their own systems. Let's talk a little bit about how we are doing this. We have the automotive workgroup, and we will see a little bit about its use case later on, but we have a use case from the automotive workgroup and a use case from the medical workgroup. Both of those use cases then feed into the development-focused workgroups in ELISA. The safety architecture workgroup takes these use cases and asks: what are all the different components that would make up the system, maybe the watchdog or the memory subsystem, the various piece parts that make up the safety-critical system. In all of these working groups we have experts from various different areas, kernel experts and safety experts, and we also have industry members that are looking to build safety-critical systems. So we have participation from experts in all of these areas, interacting and collaborating to come up with the deliverables we just talked about on the previous slide. The automotive workgroup, for example, is working on putting together a use case that goes into the rest of the workgroups in this bubble here: the safety architecture working group, the kernel development working group, and tools investigation. So we look at that use case, and based on it, for that specific use case, we identify the different kernel modules that are necessary, and then we look at those kernel modules, identify hazards, and make a safety argument for them. So next, moving on to the next slide to talk more about our strategy. We are continuously refining our strategy based on the new use cases that we look at; it's a continuous improvement of taking feedback from the different working groups and then refining our strategy. First of all, we have to identify the hazards, and figure out how we represent those hazards in a way that makes sense to system integrators. For example, we might talk about a use-after-free, but what does that mean to a system integrator? What is the impact of a common weakness that says use-after-free?
What does it mean in terms of the impact on the system itself? Those are the kinds of things we are trying to do: take the hazard, define the system state that you could get into because of that hazard, and figure out how to avoid it. Looking at the kernel: what kind of options, features and configurations do we have in the kernel to be able to detect that hazard and find a way to mitigate it? We are also going to clearly define what ELISA isn't. We are not trying to certify. We are not trying to provide a new kernel distro. It is really about providing resources for system integrators to build their safety systems. Once we have a technical strategy, we are going to execute it: defining the hazards, making recommendations on which configurations would work well for your safety-critical system, and providing resources on how to enable them and what the best options are. Say you have a kernel configuration that has debug options: do you want to use the debug options? Can you pin down which level of debug options you would want to run on your system, and so on. So we are going to provide those kinds of resources. And how do you validate them? What kind of tests make sense for that particular kernel module, and so on. And then, after coming up with all of these deliverables, comes publication: how do we make them available to system integrators? We are also working on figuring out the best ways to make these available for system integrators to use. Currently our scope is automotive and medical use cases; other use cases are welcome. So let's talk a bit about what we are doing in the medical devices working group. We are using the OpenAPS analysis, and we reviewed the analysis with STPA experts. We will present the results of our analysis to the OpenAPS community for feedback, and we will publish the OpenAPS STPA analysis on GitHub and put it under version control. That's what we are planning to do for the medical devices working group. And there will be a use case coming out of this working group that will help guide the other working groups in focusing on this particular use case and developing a safety analysis model. Here is a detailed view of the OpenAPS safety analysis. I won't go too much into detail here; this is for you to look at. So the next working group, the automotive working group, is collaborating with AGL. This working group is developing an automotive use case, taking the telltale use case and consolidating the concept and demo application. And in this working group we are doing the architectural refinement, which will feed into the rest of the work groups for this use case definition. So here is an example: telltale display and monitoring. This is the safety-critical application, a pared-down application, that we are using to be able to do analysis on.
We take all of these use cases, and then the safety architecture working group is the one that looks at them; different working groups focus on different aspects of the use case. The safety architecture working group looks at these use cases and comes up with a complete definition of top-level safety requirements for the kernel, because we have to identify, for this particular use case, which kernel modules and configurations make sense. So we do that, and we start the safety analysis for the telltale safety application we talked about in the automotive group. Currently the focus in this work group is automotive, so you will see work on the telltale safety application and on coming up with a kernel configuration that would serve that use case well. We consolidate and refine the qualification methodology using the telltale use case as a driver, and then start on freedom from interference, considering also the non-safety-critical parts of the kernel as well as the user space. In a safety-critical system you have safety-critical resources and also non-safety-critical resources. For example, if you are thinking about an automobile, a car, the safety-critical parts would be the ones that drive the drivetrain, the actual gear-shifting kinds of things, and then power management, whereas the non-safety-critical things could be the infotainment applications. So we are coming up with a separation, a partitioning, that says: these are the safety-critical areas, these are the non-safety-critical areas, and here is what would make sense. The plan is to expand the focus to the medical working group use cases. All of this activity is happening in parallel; right now the focus is automotive. Next, the kernel development process working group. What we are doing in the kernel development process working group is assessing the Linux kernel development and management process. When you are thinking about safety-critical applications and the way safety analysis has to be done, we have, in some ways, to look at the kernel development process itself, look at it closely, and see where we can derive evidence for safety from the processes that are already happening: how do kernel releases happen, what kind of testing gets done on the kernel, what kind of evidence can we gather from the kernel development activity that happens on the kernel mailing lists, and so on. That's an example of what we are planning to derive from it. And as we do this analysis, in some cases we are also looking to improve kernel documentation, to improve it and add to it when we find gaps; that's part of the process as well, keeping in mind the limited resources we have in ELISA. And like I mentioned earlier, we are leveraging Common Weakness Enumerations. We take the CWEs and look at which CWEs make sense for a Linux kernel that is running in a safety-critical system. As an example, concurrency and locking errors, memory-related errors and pointer errors are things we are closely looking at, and the CWEs related to those. We take those and define system characteristics: what kind of system characteristics do we need to be able to build and safely operate a safety-critical application, in terms of responsiveness, being able to handle memory-related errors, and timing-related errors?
Some of these come out of freedom-from-interference considerations, and we take those and define the high-level system characteristics that we need to be able to build and safely operate safety-critical systems, and then identify failure modes and attributes of the Linux kernel. Our primary focus is the Linux kernel: looking at it, and then identifying kernel configurations and features important to safety-critical systems. And as I said, we are heavily focused on the automotive and medical use cases for this work. A subgroup of the kernel development process work group is tools. What we are trying to do in that subgroup is look at the various static analysis tools available within the kernel, and some outside it, to evaluate and do static analysis on the kernel; static analysis is one of the important parts of a safety evaluation of a system. So we are looking to provide resources and derive the tool chain, to say: hey, these are the tools you can use. For example, the kernel does have several static analysis tools available: Coccinelle, coccicheck, and then sparse and smatch; there are several within the kernel repository itself. In addition to that, we are looking at additional code checkers that are available, and we are looking to investigate and clear the results as well. Our goal is to have these continuously running on kernel code as releases come out, and to be able to provide the static analysis results on the tools investigation subgroup's CI; that's kind of a stretch goal that we are working toward. So you will see that happening in the next six months or so. This group is also assisting newcomers with onboarding, in terms of familiarizing themselves with how the kernel development process works, how you send patches, and what it means to be part of the kernel community. So newcomers are always welcome to participate in any of our working groups, including the tools investigation and code improvement subgroup.
In addition to that, we also have an ambassador program: a group of ELISA members who volunteered to represent and speak about ELISA at various conferences. We are also working on putting together material so we can share our progress, as well as what we are all about, with the Linux kernel community and the wider community; for example, I'm speaking to you at FOSDEM, and we will continue to engage like this. Our goal is multi-fold: we want more people coming in and collaborating with us, and that is part of the reason why we are looking to speak to you at various places, to welcome you to join us in the collaboration. Another important piece of work we are sponsoring under ELISA is mentorship projects. We have 2020 projects that are wrapping up; these are links where you can go and check out what these projects are all about. For example, the code checker work we talked about on the previous slide: integrating kernel static analysis tools with the CodeChecker web UI is one of the activities happening in the mentorship projects, and it will then become a part of the ELISA CI. We are continuously looking to create projects through which we can train new developers under the mentorship program, and we will be considering future projects for the summer session that's coming up. In addition to that, we're also doing white papers and outreach. We would like you to come join us and help define the safety architecture and the medical use cases. We are a little short on volunteers who can help us with the medical use case especially; we are currently focusing on the automotive use case and have good coverage there, but we are looking for help on the medical side. So how are we doing all of this? This is not possible without the support of our ELISA members. We have our premier members listed here, and without their help we wouldn't be able to do what we have been doing and what we would like to achieve in the coming years. Thank you very much. Please feel free to ask any questions; I can answer them.
Assessing whether a system is safe requires understanding the system sufficiently. If the system depends on Linux, it is important to understand Linux within that system context and how Linux is used in that system. The challenge is selecting Linux components and features that can be evaluated for safety and identifying gaps where more work is needed to evaluate safety sufficiently. The ELISA project has taken on the challenge to make it easier for companies to build and certify Linux-based safety-critical applications. In this talk, Shuah Khan from the Linux Foundation gives an overview of the ELISA project and its technical strategy.
10.5446/51873 (DOI)
Hello, FOSDEM. My name is Jos. I head up marketing at Nextcloud and I'm also a co-founder. And I want to talk about 2020. Now I know that's not everybody's favorite year; totally there with you, with COVID and everything. It was a tough year, also for us as a company and as a Nextcloud community. Of course, we were making collaboration software, remote collaboration software, so this was not only a challenging year but also an opportunity for us to help a lot of other people get their work done in a safe way from home. So that's definitely a big part of what I want to talk about for 2020. But of course the year started in January, and that was pre-COVID for most people. It was a big month for us, because in January we held an announcement event that we had announced well in advance; we did a live meeting in Berlin, which was still possible then, to announce Nextcloud Hub. Nextcloud Hub was a new release. Normally, of course, after 17 comes 18, and that is the name you would expect for Nextcloud 18. But we decided to rename it, and to explain why it became Nextcloud Hub I want to go back a little bit further in history. When we started Nextcloud in 2016, we made a file sync and share platform. And we were on a journey, because file sync and share was not really enough anymore. People expected more. If you had file sync and share, you could put your files on Nextcloud and sync them to your mobile devices and your desktop. But people started to expect that they would get a notification when somebody shared a file with them. They wanted to comment on a file, and they wanted to be able to maybe have a chat and attach a file to it. People used it to get work done; it wasn't just something you used occasionally with a client. And we started working on that. In 2016 we had already started to integrate Spreed, which was an audio/video call solution from a partner of ours. But we couldn't really integrate it natively, because it wasn't PHP and JavaScript; it wasn't a native part of Nextcloud. So rather quickly we decided: okay, we need to build something that works tightly integrated into Nextcloud. So we started, and in January 2018 we announced Nextcloud Talk. That was really a next step for Nextcloud: audio/video calls available as an app, one click, installation, it just works. Of course there was more needed. If you want to have video calls, you often need to connect through firewalls and all that stuff. And for that we worked with our partner to develop the high performance backend, which allows you to proxy the streams through a separate server, just to make sure that you have a connection even when there are lots of firewalls in place. So there were a lot of extra complications, but out of the box it worked for most basic use cases, for most users. And then, in January of last year, we kind of said to each other: look, you know, Nextcloud for most users is already more than file sync and share. Nextcloud Talk is widely used and appreciated by users. Our calendar and contacts apps are being used by a lot of users. So it was time to make it official and say: okay, Nextcloud Talk was the next step in the journey from file sync and share to a content collaboration platform, and this is it. We are now one solution that has audio/video calls and chat, that has calendar, mail, contacts and of course the file sync and share built into a single platform. So this was a big deal for us to announce.
But it was more than just the concept of file sync and share growing into a content collaboration platform. Nextcloud Hub also integrated Office. Now, there was Office for Nextcloud that you could install: you could have Collabora Online and ONLYOFFICE, and then you would have to install a Docker container and update that, of course, and you had to configure a reverse proxy in the web server. That was quite a barrier for a lot of people; it isn't exactly trivial to do. So we said: okay, what can we do to really lower the barrier and make this easier for people? We put in a lot of effort to make it a one-click affair. You would click install in the app store, a thing would get downloaded, and a lot of magic happened in the background, but it would then work: you would have Office without having to do reverse proxies and all kinds of other complicated stuff. And we succeeded at that; we really managed to bake this in. With ONLYOFFICE, we introduced integrated Office in Nextcloud Hub. So Nextcloud Hub was files, calendar, contacts, mail, audio/video calls and Office, all in one package, out of the box. Of course, we also improved all of these parts. We worked a lot on Talk: an improved user interface and some new features. We worked on the groupware: Calendar version 2.0 came out with a completely rewritten user interface. You see a busy view, so you can see if your colleagues are busy when you try to set up a meeting with them. You can book resources like a meeting room. And we introduced support for booking a Talk room for a meeting: when you create an appointment, you can say, okay, add a Talk room, and then automatically a Talk room will be created with the right name, and the invitation to the room will be put in the mail. So this was really a very nice calendar app. The Mail app had its first 1.0 release, which was also a big deal of course, with a lot of refinements all over the place, as well as a really cool feature: support for itineraries. Nextcloud Mail now recognizes when you get an email from, say, a travel agency or your flight notification, and then it will say: okay, this is a flight from Los Angeles to New York on these dates. It shows that at the top of the email, and there's even a button that says "add to my calendar", and then you can get this into your calendar and see it there nicely with those dates and times. So the groupware really took a big step forward. We did more. We introduced Nextcloud Flow. Nextcloud Flow is an automation solution built into Nextcloud. You can automate things like: if a file gets shared with you or dropped in a folder, from a certain user, in a certain group, at a certain time, or even from a certain IP address if you want, you can execute an action, like turning that file into a PDF, giving you a notification, putting another file somewhere, or giving it a tag, all kinds of stuff. So this was really a nice feature for a lot of people to automate certain basic tasks. Even bigger, I think, was the introduction of workspaces. A workspace is a space above a folder where you can give some context to the folder. So for example, let's say you're sending security patches to a customer. Then, on top of the folder, you type: okay, you know, these are security patches, here's the problem.
This is the workaround, blah blah blah, here is how to apply the patch — patch minus b, you know — and here are the patch files, and then you see the files below that. Or if you have a shared folder in your team, you say: okay, this is the marketing team folder, please put all the event data per year in the events folder, and if you have images and screenshots, put them in the collateral folder, that kind of thing. So you can coordinate your collaboration by adding some context to the folders you have in your system. It's very helpful. You can even put to-do lists in there, with checkboxes — okay, this is done — and an image; there's all kinds of stuff you can put in there. We use it a lot internally, and I think a lot of our users use it; it's just a really useful way to facilitate your collaboration. Now, we also facilitate collaboration with file locking. Sometimes, of course, if you have a Word document that you have shared with other people, you can just click it and edit it, and other people can edit it too — that's all collaborative and built into Nextcloud Hub. But sometimes you're working on an Adobe file or, you know, an SPSS file, and those you can't just click and edit in the web interface; they need to be downloaded, edited and uploaded again, or you edit them in your file browser directly and let the sync client sync the files. And then, how do other colleagues know that you're editing this file? You don't want to clash. So with file locking, you just click and say: okay, I lock this file. Then other people can see, with a little avatar, that you have locked the file. They can even ping you with a comment or a chat, or click on your avatar and start a video call if they want to know how long you will be locking it, how long you are working on it and what you are changing. So you see this integration really coming together nicely there. We also introduced Nextcloud Photos as a replacement for the Gallery app, which was only barely maintained, with a new interface; it's quicker, faster, and has a couple of nice features as well. So all in all, Nextcloud Hub was really a big step forward — integrating Talk and Office and groupware and adding all these new features. We really started 2020 with a big bang. And to give people an idea of what you can do with Nextcloud Hub, we created a nice video. So I want to show you that video — it's just a few minutes — and then you have an idea of what we built for you. Nextcloud Hub. Nextcloud Hub is the full content collaboration suite to manage all your documents and communications, all in one worry-free package, so to speak. With Nextcloud Files, you can manage all your files and documents. You can download and upload, view and edit them. You can find everything with full-text search and by tagging. And of course, you can share documents and collaborate with others — for example, with Carl, Paulette, Mickey and many more at the same time. In Nextcloud Photos, you can find and view all your photos and videos, just like this one. Oh, that was nice. In Nextcloud Talk, you can chat with others, work on documents and even do audio and video calls, like in a real office. And best of all, you can even participate from your mobile phone. In the Nextcloud Calendar, you can manage, share and sync all your appointments with your phone and desktops.
Oh, I see right now I need to make a quick phone call. But where's the number? Of course, in Contacts. In Nextcloud Contacts, you can view, manage and share all your contacts. And of course, you can also form groups based on topics, just like my line dance group here. Yee-haw. In Nextcloud Mail, you can of course read, write and manage your emails, but there are also intelligent features like detection of itineraries. Nextcloud Hub even includes a full-featured office suite that lets you edit all your office documents together with others. Do you remember Carl, Paulette, Mickey? Here they are again. Nextcloud Hub is the powerful content collaboration platform with all the features you need. And by the way, the entire Nextcloud Hub is open source and can run on premises, so you have control over your own data. Your cloud, your data, Nextcloud. Okay, so that was the video of Nextcloud Hub. I think it's a really nice video; it should give you an overview of what it can do. And it was a big deal, right? Well, then came February and March, and of course the COVID crisis hit, and we said to each other: holy moly, we need to make sure that our users get the most out of Nextcloud, because this is the time they need it. A lot of our users and customers, of course, started to use Nextcloud much more intensively. So we focused on performance, among other things, because if you use it more — more activity — you would otherwise need to add more hardware, and your network connection is constantly busy because you're having video calls. For Nextcloud Talk, for example, we really optimized the performance to make sure that you can have more people in a call without overloading your network connection: it will dynamically decrease the video quality when more people are in the call, and things like that. We also introduced a grid view, so if you have a call with a lot of people, you can see them all at once in a nice grid. We introduced drag and drop, so you can just drag a Word file into a chat room, and then you can click it and edit it right away, because we also introduced document editing right there in Talk: just click the file, and not only is it shared with everybody, but everybody can also join and edit it collaboratively. So we really leveled up the collaboration and integration of Nextcloud Talk. We did more work. We introduced security improvements, for example passwordless authentication with WebAuthn. That's a new standard that allows you, for example, to plug in a security key and use that for authentication. We worked on that together with Nitrokey — an open source hardware security key, you should totally look into Nitrokey. We did a couple of other security improvements like automatic logout, account locking, password expiration and a few other things. But to round it all up, when you're really talking about performance, the biggest thing we did for this release is that we open sourced the high performance backend. I already talked about this before: you need the high performance backend, and you need to put it on a separate server from Nextcloud, so it's a bit of work to set up, and it can eat quite some bandwidth, but it makes sure that you can have a call with more than six, seven, eight people at a time, because it bundles all the video streams that you receive and send. So this really improves the scalability of Nextcloud Talk.
This was developed by Struktur, a partner of ours, and together with them we managed to open source it and make it available for everyone, which was really cool, I think. So again, Nextcloud 19 was also a big release, with the open source high performance backend, with better performance, with the grid view, with all the security improvements. And we also made a video of this release, just to give you an idea of everything that had changed and improved and how wonderfully Nextcloud Talk works from home. Nextcloud is a single platform that keeps you productive while your own system administrators manage it in-house. Nextcloud Home Office means comfortable access to everything. Manage your emails. Keep your schedule up to date and create video chat rooms for appointments. Debate with chats or video calls, one-on-one or in a team. Share documents and work together on them in real time. No mess, 100% control and transparency. Nextcloud Hub does it all in one package. Yes, that is how I would love to work: easy and relaxed, from anywhere. Nextcloud. So summer was over, but unfortunately COVID was not, and it became fall. Fall is normally when we do the Nextcloud conference, so unfortunately this year our conference had to go online. We set up a Nextcloud instance and gave all the visitors an account there so they could chat with each other and, of course, with the team. We had pre-recorded all the talks, just like here at FOSDEM, and then held a live Q&A session after each one. Yeah, it was a bit of a challenge, of course, but that's what the situation demanded. At the conference we, of course, once again had a big event. There was a big keynote in which Frank announced Nextcloud Hub version 20. And this was again quite a big release. Nextcloud Hub 20 introduced three main things. First, the Dashboard. The Dashboard is kind of where you open your day: you can see files that were recently shared with you, chat messages you got, maybe new emails or the tasks in your calendar. It's just how you start your day. Really nice. You can put your own widgets there, you can change the background a little bit. It made for a very nice and, I think, very popular addition to Nextcloud. We also introduced unified search. Nextcloud has a search on the top right, but it used to search only within each application: in Files it would show you files, in Calendar it would show you calendar items, and in a lot of apps it wasn't available at all. So what we did is create a unified search that always shows you results from all the apps. Even if you are just on the Dashboard and you type something into the search, you see chat messages and calendar invitations and emails and of course files and all other content. That made it, I think, a lot more useful, and it helps you find your stuff in Nextcloud. The biggest thing that we did is all about integration. We wanted to make one integrated whole out of Nextcloud, because people aren't, of course, only using Nextcloud. Maybe you are studying and you use Moodle, or you work at a software company and you spend your day in GitHub or GitLab and Jira, or maybe you do support tickets and you work with Zammad — these are all other platforms. What if they were integrated into Nextcloud? That would be beautiful, right? So that's what we worked on.
We integrated, among other things, Moodle and GitHub and GitLab and Jira and Discord and Zammad, and we made it possible for them to provide results in the search, so you can see your Moodle courses in the search, and you can see GitLab issues in the search, and Zammad tickets, et cetera. We also created dashboard widgets for them, so in the morning you can see: oh, here are my urgent GitHub issues, and oh, there's a course I need to go to later today. And we also worked on integration with Talk, because of course if you use Talk you can talk with other people on Talk, but some of your colleagues might also be using Slack, or you might privately have a Microsoft Teams account that you need to use for your local soccer club, or you might hang out on IRC where you have a lot of friends. We wanted to bring those closer to Talk, and of course you can't make everybody use Talk, so we introduced the bridge. With the Talk bridge you can connect a Talk room with one or multiple non-Talk services, and even another Talk server, so whatever somebody says in Slack will then show up in Talk, and also in Teams and everything else that you connected. Of course you can connect multiple rooms to multiple channels, so that you always have a bridge to the IRC channel for support, and so on. This is a really nice piece of integration that we did there. Now, all in all, Nextcloud Hub 20 was really a big release. We also introduced status, so you can set your status and say: okay, I'm currently busy with a presentation, please do not disturb, et cetera. This is visible in Talk, for example, but also in Files and in the Calendar and the Mail client and other places. We introduced integration between the Calendar and the Tasks as well as the Deck app. Deck is a Kanban board, a little bit like Trello, where you create little cards with a task on them and then you put them in the to-do, in-progress or finished section. You can put a due date on a card, you can put a little to-do list in it and assign it to somebody. Well, with the due dates it becomes quite relevant for the calendar: you want to plan your day and see your tasks, your calendar items and your cards for the day. And that is possible: you can see the Deck items in the Calendar as well, based on their due date.
We also improved Flow. You might remember that in Nextcloud 18 we introduced Flow to automate things, and now we added push notifications, so that when a file is dropped in a folder you get a notification on your phone. We also added webhooks. Webhooks can be both triggers and something you trigger: a webhook allows an external thing to hook in — for example home automation — so you can say, okay, when somebody is at my door, give me a notification on my phone via Nextcloud, or when somebody puts a file in a specific folder, turn on the light in my room. You can do all kinds of automation things with Nextcloud this way, and I think that's also a really nice addition. There were some other features, like adding links to files in Nextcloud Text. Text is our collaborative text editor, it's very popular, and there you can add links — it uses Markdown on the back end — and you can also link to files on Nextcloud, so if you click them you go directly to that file and can edit it. We also added another feature: descriptions for public links. In Nextcloud you can create a public link and send it to someone, and they can download the file without needing an account. You can create multiple public links to the same file, so you can say: okay, to this colleague I send a public link to this file where they can also change things, maybe upload a new version of the file into the folder; to an external partner you send a read-only version; and maybe to another partner you send a read-only version where they have to enter a password and where you set an expiration date. You can now name each of these links so you can identify them: okay, this one I sent to my colleague, this one to the partner, and this one to the other partner. So this keeps your sharing a little bit more under control. All in all, there were a lot of big, epic changes in 2020 — fitting for a year that was big and epic in many ways, of course. I hope you got a bit of an idea of what we did. If you're currently using Nextcloud 20, you will of course already have seen a lot of these features, although you might not have tried them all yet, so I would encourage you to go try them. If you haven't yet upgraded, you should, because that keeps your data secure, and it is obviously also good because more people use Nextcloud 20 than our older releases, which means it gets more testing, so it's more stable. And of course we are working on Nextcloud 21, which comes in a couple of weeks. I'm not going to tell you anything about it, because that would be teasing too much, but you should definitely keep an eye on our social media and our website so you can hear what's coming when it is there. All right, thank you, FOSDEM, and time for questions and answers.
This presentation will go over what Nextcloud introduced over 2020.
10.5446/56990 (DOI)
Hello, everyone. This is my third talk at FOSDEM — a small tradition, I guess. If you are not familiar with Open CASCADE Technology, or OCCT for short, you can watch my previous talks about it. This year I will tell you about the technical side of things; my colleague Vera will cover the community-related aspects. In this talk we cannot highlight every fix and improvement contributed to the kernel, but 7.6 introduced numerous fixes, including advances in the STEP translator, the face maximization tool, Boolean operations, extrema, offsets, IGES, et cetera. Here we point out some essential updates. Since the 6.7 release in 2013, OpenCascade has kept a yearly release cycle for the framework, in response to community complaints about rare releases in the past. With timely scheduled releases, minor version bumps are not expected to emphasize specific improvements; still, 7.6 is a new revision of Open CASCADE Technology, accumulating the changes that were ready at the moment of the release, with some minor stabilization effort. This practice is close to the rolling distribution model adopted by many other projects, though some of them have also switched to a year.month versioning scheme to indicate this. We do not plan to change our versioning scheme. The 7.6 release adds more options to configure the build process: it is now possible to build without Xlib (on Linux, of course), without FreeType, and without Tk. Users can take advantage of this to better adapt OCCT to their needs. We work hard to support as many platforms and compilers as possible, but supporting old pre-C++11 compilers restricts the project's evolution and makes no sense anymore today, so support for Visual Studio 2008 has finally been discontinued. Scaling in TopoDS_Shape was the root cause of various defects in our modeling algorithms: tolerances associated with geometry carriers — vertices, adjacent faces — refer to the non-scaled originals, so geometry scaling may lead to inconsistencies between formal tolerances and real ones. As a result, the kernel now forbids assigning scaling to TopoDS_Shape objects; if one really wants to apply scaling, it can be done through the BRepBuilderAPI_Transform class. BREP and XCAF, the native formats for storing OCCT geometry without conversion, now preserve vertex normal information. 7.6 can read both old and new versions of the format — the good old backward compatibility principle, where a newer version of the software can deal with older versions of files. The Extrema package and its topological equivalent calculate extrema between geometric entities, like the minimal distance between curves or the projection of a point on a surface. In 7.6 we reduced the memory footprint and added parallel execution; these two are relevant to the BRepExtrema package. Support for trimmed curves was added for the curve-curve case of the Extrema package. Like in 7.5, we keep working on the progress indicator; we added it to Boolean operations, the sewing algorithm and BRepExtrema. OCCT had two implementations of the Boolean operations — we informally call them old and new. The old implementation has fundamental defects that cannot be resolved, which is why we stopped supporting it and are now getting rid of it; this release removes the API of the old Booleans, preventing their accidental usage. The Poly package received new methods to intersect a mesh with an axis and a triangle with an axis. A new algorithm for accurate order-independent transparency was added. We added these figures to make this talk not so boring.
The left image shows no order-independent transparency, the middle image shows the weighted order-independent transparency available in previous OCCT versions, and the right image shows the depth peeling implementation. Visualization now provides a simple but fast method for drawing shadows from directional light sources. We keep working on improving compatibility with embedded and web platforms, and the visualization core is now covered by automated tests of the OpenGL ES driver in addition to the desktop OpenGL driver. We are looking forward to seeing more community projects using OCCT as a WebAssembly module for modeling, data exchange and visualization. This release brings reading support for Draco-compressed glTF files — a lossy algorithm giving an outstanding compression ratio for glTF files. Entities representing STEP kinematics were also added, as well as a writer for the OBJ format. Minor improvements include the OSD_FileSystem class for unified and extendable C++ file stream operations. CAD models are much more complex than they might look: assemblies, instancing, transformations, nesting, references and other particularities complicate assembly-level operations. One typical problem when you deal with assemblies is sub-assembly extraction; this operation is now handled in Open CASCADE Technology. Partial loading of an XCAF document reduces reading time by loading only the attributes of interest; in total, the overhead is nearly 15% when the whole document is read. Additional details on the topic can be found in our blog post. We retrospectively versioned our previous XCAF documents and added writing support where possible; this enhancement extends data interoperability between applications using different versions of the documents. What are we going to do next? First of all, we plan to drop support for pre-C++11 compilers. By the way, you can participate in a poll created by my colleague Kirill related to this question. Modeling doesn't need any special mention, because it's the very heart of the kernel; we will definitely keep working on it. In visualization, we are looking forward to simplifying its usage, and at the same time new features might be implemented. In data exchange, we really want to add thread re-entrancy to the STEP translator. Reading of tessellated geometry from STEP is highly demanded, but I'm not sure it will be finished in 7.7. We know that our documentation is a weak point that could be dramatically improved, and we spend noticeable effort to make it better with each release: in 7.5 we reworked the overall structure, and in 7.6 we restructured the samples, added a new Qt-based sample, and added a novice guide. This work is far from complete, which is why you see it here. My previous talks were a bit generic; this time I'd like to make it a bit more personal. Well, I'm planning to take part in dropping support for pre-C++11 compilers, and maybe in some modeling activities. Also, I was reviewing most patches related to documentation, so there is no reason to stop this practice in the future — and you know who to blame. In my spare time I'm currently busy with general activities related to getting rid of unused headers, forward declarations and friend classes. It is a small thing, but I hope that my Raspberry Pi 3 will build Open CASCADE Technology in less than 3 hours. Right now, I want to turn the floor over to Vera; she has something to say. Thank you, Alexander. My name is Vera Stabnov. I'm the Open CASCADE Technology community manager.
I'm going to share with you new opportunities for OCCT users. The first ones are related to easing access to the technology. OCCT is known to be quite a complex framework, so the entry threshold is quite high, and last year we put in some effort to lower it. First of all, having received numerous requests from the open source community, we decided to publish free trainings to ease access to OCCT. The presentations cover preliminaries, geometry and topology topics; you can find them in the Training and E-learning section of the OCCT development website. Secondly, we introduced a novice guide as part of our documentation, to attract new users and help them with the OCCT onboarding process. As part of continuous documentation improvements, this year the samples section was fully restructured and organized in a more logical manner. On top of that, highlighting for code snippets was finally added throughout the documentation to make it more user-friendly and easier to work with. Last year at FOSDEM, the subject of the testing dataset was also raised: we received requests from users interested in running the OCCT automated testing system with more shapes in addition to the basic testing options. As a result of extensive analysis, a dataset consisting of more than 2,500 shapes was published, simplifying the testing process for users and contributors. With these public shapes, OCCT test coverage rose almost twofold, up to around 60%. One of the major OCCT community improvements last year was the launch of the new developer website. It brings a couple of nice features like single sign-on and forum merging. Earlier, our users could be confused about choosing the right forum to ask questions; the forum structure has been revised to improve navigation and allow subscribing to sections of interest instead of a single section for all topics. The new website also features an updated Get Involved section, expanding the ways to contribute to OCCT development. In addition to code contributions, we encourage users to share their ideas and vision on OCCT development, to help educate others by writing articles, blog posts and samples, to contribute documentation and tutorials, and to just spread the word about OCCT. We also introduced two big website sections. The first is the recently launched Open CASCADE Technology Projects and Products Marketplace. This new section allows sharing information about OCCT-based products. Being a single access point to already more than 30 projects, it eases entry into the technology and gives a wider view of OCCT capabilities and application areas. We invite you to explore the marketplace. If you work on an OCCT-based project or product, we encourage you to share your experience with the community by requesting to list your project. In the future, we plan to provide projects with an official Open CASCADE Technology Partner status, including wider opportunities for marketing and promotion. One more new section is the Research and Science Publications listing. Open CASCADE Technology is actively used at academic and research levels around the world. To provide insight into this, we launched a new section devoted to OCCT-based research projects and articles. It already gathers more than 600 research and scientific works from prominent universities and organizations in more than 40 countries. If you also use OCCT in your research or scientific project, you can request us to list it in the collection to increase its visibility and share it with the community.
To get an idea of OCCT's spread in research and science, you can also explore our recently published infographics report, which shows various aspects of OCCT application by universities and even commercial companies around the world. Of course, we plan more exciting OCCT activities this year. We plan to implement a fully digital workflow based on an eSignature service to make the contribution process easier and faster. We are working on an Open CASCADE Technology rebranding as the project evolves. We plan to launch a regular technical blog and welcome external project authors — some of you already know about it and agreed to participate — so if you are interested in preparing an article for the Open CASCADE website to share your experience of using OCCT in your project, please contact us via the contact form. As always, we will be very glad to hear from the community. Contact us to share your thoughts and ideas for collaboration, or to tell us about the project you work on. Looking forward to hearing from you. We'll get the main stream going and we'll start in just a couple of seconds here; I'll introduce you. Welcome. I'm here talking with two members of the OpenCascade team about the new release and some of the features that they were discussing in their talk. I'm really enthusiastic to hear about all of the new community changes that the OpenCascade team is working on. Vera, can you say a little bit about the reception that you're getting so far, and what you'd like to see develop with that going forward? Yeah, thank you, Seth. I'm the community manager at OpenCascade, and I think one of the major improvements, or opportunities for the community, in 2021 was launching the marketplace for projects. Currently it's free, and everyone who is working on a project based on OCCT can request to be listed there to promote their projects. For us it's actually very important to have this connection with the community and with people who are actively working on implementing OCCT in their projects or products. For next year, we also plan to launch a technical blog, which I'm really excited about. Currently we have nine authors who agreed to participate in the blog — authors from external projects. If there is anyone who wants to join or to prepare an article for the OpenCascade website, please contact me; I'd be really glad to feature your article there. We will also have a rebranding of OCCT — it will not be huge, but still very interesting — and some more nice features, the digital CLA process, as we mentioned earlier, and so on. Stay tuned. Excellent. I saw that you are making some very big strides on the visualization side of OpenCascade. Alexander, can you say a few more words about where you're pushing that part of the technology? Well, I'm not a visualization guy, but let me try to answer. Visualization is, I think, one of the most important things, because in the B-Rep world, what you see is not what you have: a three-dimensional model is a kind of mathematical concept, and without visualization it's quite difficult to move forward. Our visualization efforts are aimed at revealing the technical side of B-Rep modeling, I guess. Okay, that makes sense. On the back end side, you've introduced WebGL as a rendering engine. Do you have any plans for additional back end engines? Honestly, for now, no. When people want to contribute more to the OpenCascade project — I saw that your barriers to entry are being lowered.
Do you still have the multi-step CLA authorization process for contributing to OpenCascade, or have you updated that to a different model? So, yeah, we still have this multi-step CLA signing process, but hopefully this year, very soon, we will replace it with a fully digital one, so it will be much easier and faster. Excellent. And real quick, is there one feature that you're most excited about for 7.7? Alexander? Actually, I will be happy to see improvements in the STEP translator related to thread safety and accuracy. Excellent. And Vera? Good question. As for the community: more activities, more contributions, more bugs reported — and not only technical bugs, but maybe improvements in documentation too, if everyone can help. Thank you.
Open Cascade Technology (OCCT) is a framework for B-Rep modeling. The lecture presents a technical update from the previous talk (at FOSDEM 2021). This year we also introduce our OCCT's Community Manager who will highlight community-related activities that happened during 2021.
10.5446/14136 (DOI)
Hey everybody, this is Deepak Khatri and I welcome you all to my FOSDEM 2021 lecture. In today's lecture, I am going to talk about low-cost, open source hardware design for biopotential amplification, for the purposes of neuroscience, prosthetics and more. Now, without wasting any time, let's get started. Starting with the presentation, let's see what you are going to learn today. First we'll see what biopotentials are, and I'll talk about EEG, ECG, EMG and EOG. Later we'll look at how to design hardware for biopotential signal amplification, and at the end we'll discuss why we need affordable biopotential amplification hardware. Now let's look at what biopotentials are. Biopotentials, or biopotential signals, are basically electrical signals or voltages generated by physiological processes within the body. They are generated by a type of cell called excitable cells. Excitable cells can be found in our brain, muscles and heart — basically in our nervous system, muscular system and glandular system — and when an excitable cell is stimulated, it generates an action potential, which is the essential source of the biopotential signals in our body. The frequencies of the biopotential signals generated by our muscular, nervous and glandular systems are different, and thus require different setups for recording. Some of the procedures are electroencephalography or EEG for the brain, electrocardiography or ECG for the heart, electromyography or EMG for muscle, and electrooculography or EOG for the eye. All these signals generated by our eyes, brain, muscles and heart can be recorded non-invasively on the surface of our skin; these procedures basically record the biopotential signals that reach the surface of the skin from the excitable cells. Most of the signal gets attenuated by the time it reaches the skin, and for serious medical purposes we tend to use invasive techniques so that the resolution is higher and we get much greater signal strength. So our body produces different types of biopotential signals, and the sources of these signals include the skeletal muscles, the heart, the brain and the eye; each of these organs produces electrical signals, or action potentials, that can be recorded on the surface of the skin. Now let's take a look at how these action potentials really occur. An action potential occurs when gated ion channels briefly open. This depolarizes the membrane as sodium and potassium ions equilibrate across the plasma membrane, causing a spike — a change in the potential difference across the plasma membrane — which triggers the opening of neighboring voltage-gated channels, thus creating a voltage spike, the action potential. Now that the basic biology of biopotential signals is covered, let's look at the next section: how to design hardware for biopotential signal amplification. For proper amplification of our biopotential signals, there are some requirements for our bio-amp. These are the following. The input impedance of our bio-amp should be high — high enough to match the impedance of our skin — so that the already attenuated biopotential signal does not attenuate further and the input of our bio-amp does not load the skin. A high CMRR, or common-mode rejection ratio, is required for our bio-amp to get a clean signal — basically 100 dB or more, so that common-mode voltages get cancelled.
That means interference coming from the mains and other interference sources will be cancelled out before it reaches the next stage of our amplifier. The output impedance of our bio-amp should be low, so that we can attach a cascade of amplifier stages for further filtering and further amplification of the biopotential signal. Our biopotential amplifier should provide flexible gain control, so that we can easily record EEG signals, which are in the microvolt range, and EOG, ECG and EMG signals, which are in the millivolt range. Our bio-amp should provide some form of input protection: either we isolate the system from the mains by using a battery, or we provide the input with ESD protection diodes or high-voltage protection diodes. Although it's not a hard requirement, our bio-amp should also be a very low-power device, so that we can run it off a battery for a long period of time. Now, to achieve high input impedance, high CMRR and low output impedance, we use a special type of amplifier called an instrumentation amplifier. An instrumentation amplifier is basically a combination of two or more operational amplifiers arranged to achieve high input impedance, high CMRR and low output impedance. Let's see the designs now. The two op-amp design looks like this; it is taken from the Texas Instruments Analog Engineer's Circuit Cookbook, and the diagram can be found on page 27. This is basically a circuit which you can call a bio-amp, and you can use it to amplify your biopotential signal. The other design, which is most commonly used in biopotential amplification units, is the three op-amp design. It is generally better than the two op-amp design, although the two op-amp design has its own benefits. Although the instrumentation amplifier is the most important part of a biopotential amplification unit, we also need circuits for interference reduction: the DRL, or Driven Right Leg circuit, and a shield protection circuit. They are essential if we are going to use our biopotential amplifier for medical purposes, and they are also very important for neuroscience research. Now let's look at a hardware platform I created using this three op-amp design. It is called BioAmp v1.5, and you can find it on the Upside Down Labs GitHub page, in the BioAmp v1.5 repository. This is how the board looks. Let's take a look at the schematic. You can see we are using a very basic three op-amp instrumentation amplifier design with a gain of 45. The circuit you see here acts as the DRL, or Driven Right Leg circuit, and it also provides a Vref for the instrumentation amplifier — Vref is basically the midpoint of the supply voltage we are feeding the amplifier. The design is based on a single quad op-amp chip, the TL074. It has JFET inputs, so the performance of the device is very good, and it can be used to record EEG, EOG, EMG and ECG signals. This is how the actual BioAmp v1.5 looks. You can basically use it in three simple steps: connect the battery, connect the electrodes to the muscle you want to hear from, and then either connect earphones to the earphone jack or use the output cable to record on your smartphone or laptop.
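Before the demo continues, a quick numeric aside on the three op-amp topology just described. This is only a sketch of the textbook gain formula, gain = (1 + 2*R1/Rg) * (R3/R2); the resistor values below are hypothetical placeholders chosen to land on the gain of 45 mentioned above, not the actual values from the BioAmp v1.5 board.

    # Gain of a classic three op-amp instrumentation amplifier (sketch).
    # Resistor values are made up for illustration, not the real BOM.

    def in_amp_gain(r1: float, r_gain: float, r2: float, r3: float) -> float:
        """First stage: 1 + 2*R1/Rgain; difference stage: R3/R2."""
        return (1.0 + 2.0 * r1 / r_gain) * (r3 / r2)

    # Example: 22k feedback resistors, 1k gain resistor, unity difference stage.
    print(in_amp_gain(r1=22e3, r_gain=1e3, r2=10e3, r3=10e3))  # -> 45.0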
Here I have started the Spike Recorder software on my PC; notice the signal when I flex my muscle. Whenever I flex the muscle you can see the signal on the screen, and I am also recording the audio with the video, so you can hear what you will be able to record when you use the device. Notice the signal when I move my eyes: when I move my left eyebrow nothing changes, but when I move my right eyebrow... Now, to record ECG signals, I have attached the signal electrodes close to my heart and the reference electrode above my right leg, and I will start the device now. You can clearly see that we are recording ECG. Let's try to capture a single potential. You can see some EMG artifacts when I breathe. The fact that our device can record ECG is in itself very great. Let's try to change the frequency band — I will make it 0 to 67. My ECG signal is much clearer now and you can see all the peaks; this looks much more like an ECG now. You can also change how the signal looks by placing the electrodes at different places near your heart. Recording EEG with such a small device, on the other hand, is very complex and literally very hard, because EEG signals are in the microvolt range, and the major reason is that we don't have access to electrodes that can tap into the areas where EEG is more prominent: I can record by placing the electrodes on my forehead, but if I want to tap the visual cortex, it would be better to record from the back of my head, above the inion. Now let's look at a hardware design which is not my own. It comes from Backyard Brains, and we will try to make it more affordable using the techniques and circuits we have learned so far. This is the schematic of their EMG SpikerShield version 1.7, and you can see they are using a dedicated instrumentation amplifier, the AD623, plus a dual operational amplifier for filtering the output and further amplifying the biopotential signal. What we can do here is replace that instrumentation amplifier with a two op-amp instrumentation amplifier design using the same TL074 chip, use the third op amp to build a similar filtering circuit, and the fourth op amp to build a similar amplifying stage. So basically we can use a single, very cheap TL074 to replace the same circuit and get a very similar output, because the TL074, as I have used it, is very good at amplifying biopotential signals thanks to its JFET inputs. I believe we can create a similar device using a single TL074 chip; I will try to build one by the time of this talk and share the link in the chat box. Now that we have seen how biopotential signals are generated and how we can create affordable hardware to amplify them, the question arises: why do we need affordable biopotential hardware? As a biotechnologist who happens to love electronics and computer science, I believe that biotechnology is the next big revolution, and to make sure that everybody gets an equal opportunity to show that they can make a change, we need a supply of affordable development tools — and biopotential amplification is the basic front end we need to create those development tools. To kick-start the journey of creating affordable and open source biopotential hardware, Upside Down Labs is going to focus on creating research-grade neuroscience hardware and affordable analog front ends for prosthetics control, and I would love to see all of you supporting me on GitHub under Upside Down Labs.
Creating affordable neuroscience and prosthetic hardware is my main goal, but I have some things in mind which I want to see in real life. I want to make, or help make, a digital telepathy system, which is basically a brain-computer interface that can detect what you are thinking using machine learning. The second one is my personal favorite: it is called the butter robot. The entire purpose of the butter robot's existence is to pass me the butter, but the cool thing is that its brain is made up of living tissue — neurons trained to pass me the butter — and it has the ability to sense the environment using vision and orientation, and comes with the capability of motor control. That was all from my side. Let me know what you guys think about it, and if you have any questions put them in the chat box and we will take a look. Thanks, bye, peace. So the thing is, if you are going to create a very good instrumentation amplifier, you need very precise resistances, about 1% or better; commercial parts typically use 0.01% resistances that are laser-trimmed into the chip die itself. Those instrumentation amplifiers are very good, but they are pricey: some instrumentation amplifiers cost about 50 cents and are decent, but for a very clean output you will need an instrumentation amplifier that costs around 10 dollars or more. Texas Instruments has created some very good bio-amp front ends, like the ADS1292 and ADS1298 — that kind of part is good, but I wanted something to teach students and learners how the internals of this kind of instrumentation amplifier and biopotential amplifier work. That's why I created these projects, and I think the output is clean enough to use with your projects: the EMG signals are very clear, EEG not so good but I am working on that, and ECG is also very good. So you can start working with ECG and EMG right now, and if you are into neuroscience and EEG, more awesome stuff is coming to the GitHub organization page, and I will share it with the community soon. As for the trade-off between speed, precision and clarity of the signal: the signals look very clear because I have put in some very good band-pass filters, but the 50 hertz noise is still very prominent if you are standing close to a live wire or something like that, and because there is no notch filter for 50 or 60 hertz, you have to do all of that kind of filtering in your software. By creating this hardware I also wanted to spark students to design their own filtering algorithms and things like that, so that they can just take the output from the hardware itself. You can build the entire hardware I have shown for about 50 cents; it's more costly on Tindie, and I am not proud of that, but creating the PCBs and everything takes effort. The bottom line is that the output is clear enough to be used in a school project or even a college project, and for neuroscience research I will be posting some more good stuff. So I noticed in the schematic you were showing that it looked like you were dealing with a fixed gain on the instrumentation amplifier. Did you tune that specifically to each application, or did you find that you had to change that gain as you were going along? Are there any benefits to tuning it? If someone wanted to go out, build your circuit and start experimenting with this, do they have some gotchas there?
So I first created this circuit in 2017 and improved it over the years. What I learned is that if we keep the gain above 100, it will not work for everybody, because the skin impedance will not match and you will not get any output. The second thing is that a gain of 45 is in a range where you can use it with your earphones without blowing the speakers, and you can also use it with your mobile phone or laptop without overloading the audio input. That's why I kept it at 45: all the EMG and ECG signals will be recorded, and if you want to do some EEG stuff, you will need to add some extra filtering and amplification circuits, which are very easy to build — if anybody wants, I can share some very basic circuits which you can build with something like an LM358. I think you would find a lot of active takers. I know Eric Herman, who presented in the open hardware dev room last year, has some interesting projects as well with the Open Electronics Lab, so I think those two definitely have some commonalities. For folks who are interested in this sort of work and would like to contribute, are you accepting pull requests on your GitHub? What should they submit their PRs in — is it KiCad schematics that you are taking? At this moment the hardware is pretty much settled, but the software is not. I am still using the Backyard Brains software, where you can only record and get the waveform output, but what I want is a Python script or a Python GUI where you can record the signal and normalize it so that it can be used in a machine learning model — so that if I am moving my eye, I can record the signal and train the model so that whenever I make the same movement, something happens on the computer, and things like that. That is not complete, so the software is the main challenge. Your focus right now, it sounds like, is building out the training, signal processing, and then interpretation of that output — is that a fair statement? Yes, and I also need some help with hardware. The BioAmp is in itself complete, but we need something on top of it so that we can use it for EEG: the end part, the filtering and amplification stage, is not complete, and further amplification for, say, a speaker can be added, or many smaller modules can be created. The EMG shield is not tested yet; I am working on that, I have already created the PCBs at home and I will make a video about it. This will be the first step in creating an open source hardware platform for building prosthetic arms: I will be using an STM32F103 with 16 channels of 12-bit ADC and 16 of these biopotential amplifiers to create a platform so that anybody can use it to create prosthetic arms. I want help with that. That sounds great — people can reach you at the GitHub repository if they are interested in helping with the software development for the prosthetic arms. Yes, and I have also shared a link to the website; you can submit any questions you have there, and any feedback as well. That is really fascinating work. Where do you see the trade-offs? You have talked a couple of times about the need for filtering, and obviously any of the electrical signals coming out of our bodies are going to have a lot of crosstalk and a lot of additional noise on them. Do you see those as better addressed in hardware, or are you interested in just sampling as much as you can, getting it into the software side, and then doing your signal processing purely in software?
So the thing is, if you are going to create general hardware to record all the ECG, EMG and EEG type signals, we cannot include the filtering circuit in the product itself, so we have to do all of that kind of filtering in software. But let's say you are going to record only EMG: then we can fix the bandwidth — say we will be recording from 300 hertz to 1 kilohertz — and all the 50 and 60 hertz noise will be gone, so there we can include hardware filtering circuits. For a generalized PCB we cannot do that. So you might have some signals of interest coming from the body at around 50 or 60 hertz that you don't necessarily want to filter out beforehand. The obvious question that comes up then is how you deal with the dynamic range on your output. Is that sampling done purely through sound card sampling, or are you looking at a different digitization scheme? For now the BioAmp v1.5 uses the sound input of your computer or laptop, but if you are going to use it for a prosthetic arm, you cannot connect it to a PC, so we will be using the ADC of the microcontroller itself, and the dynamic range depends on the frequency band. You can control the range you want within the software: the Spike Recorder software allows you to set the frequency band and record it. The bandwidth there is obviously very important. What sort of microcontroller are you looking at using for this project? At this moment I will be using the STM32F103, which is the microcontroller on the Blue Pill board. The MSP432 is also a great option, but it requires some extra components and I cannot make PCBs for it like this at home — it becomes more difficult once you need multiple layers at home. So Deepak, thank you so much for a really informative talk. I hope people look at this and see some interesting places where they can contribute to a really fascinating interface between our wetware and hardware in the real world. I will post a link to this chat in the main chat room and people should come in here; we will stay on for the conference and people will be able to follow up with you there.
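Picking up on two points from the Q&A above — that mains-hum filtering is left to software, and that the speaker wants a Python script to record and normalize signals for machine learning — here is a rough, hedged sketch of that software side. It assumes the recording was captured through the sound card as a WAV file; the file name, cutoff frequencies and window length are illustrative assumptions, not values prescribed by the BioAmp hardware.

    # Hedged sketch: clean up and normalize a sound-card recording of a
    # bio-signal. File name, cutoffs and window size are assumptions.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, filtfilt, iirnotch

    rate, data = wavfile.read("recording.wav")   # hypothetical file
    if data.ndim > 1:
        data = data[:, 0]                        # keep one channel
    x = data.astype(np.float64)

    # 50 Hz notch to suppress mains interference (use 60.0 where applicable).
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=rate)
    x = filtfilt(b_n, a_n, x)

    # Rough EMG band-pass; the 70-450 Hz band is purely illustrative.
    b_b, a_b = butter(N=4, Wn=[70.0, 450.0], btype="bandpass", fs=rate)
    x = filtfilt(b_b, a_b, x)

    # Zero-mean, unit-variance normalization, then fixed windows for ML.
    x = (x - x.mean()) / (x.std() + 1e-9)
    window = rate // 2                           # half-second windows
    n = len(x) // window
    features = x[: n * window].reshape(n, window)
    print(features.shape)                        # (windows, samples per window)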
The term brain-computer interface is well known among engineers, tinkerers, and specifically among researchers. Companies like BackYard Brains made it accessible for all at a relatively affordable price, with their initiative of neuroscience for all. The price of their device and the ease of usability it comes with is pretty good for a school student who's just looking for an introduction to neuroscience but, the resolution of Arduino's ADC (10bit) doesn't allow it to be used for any real neuroscience research project. The company called OpenBCI also creates some good hardware for Biopotential amplification (4/8ch 24bit) and their hardware is much more capable but, it's very costly and certainly not for everybody. As an engineer myself, I believe we can create much cheaper hardware for Brain-Computer interface devices than currently available in the market without losing any signal quality. I have started working on some prototypes already and one of the devices is called BioAmp v1.5 which takes benefit of the already available high-resolution ADC input of your computer which is normally used to record audio. The device basically converts the muscle into an audio source and provides us with the option to directly listen to it using earphones OR to visualize/record the signal on our mobile/laptop using Audacity/BYB Spike Recorder.
10.5446/53302 (DOI)
Hi, good morning, good afternoon, good night, and thank you all for joining me for my talk about type annotations in Python. This talk is intended for developers that are not necessarily familiar with mypy and want to learn, and for developers that have some experience with mypy and want to learn a bit more about typing in Python. My name is Haki Benita, I'm a developer and also a team leader, and I'm a big fan of Python and a big fan of mypy. So today I want to share some of my experiences from working with mypy. So what is mypy? We all know that Python is a very dynamic language — you can do just about everything with it. And unlike other languages like Java, you don't have to declare types for variables, for example. Now this has some benefits, but it also has some drawbacks. So mypy is meant to bridge the gap between dynamic and statically typed languages and provide the benefits of static typing in Python. To start using mypy, all you have to do is install mypy with pip, and then you can start adding annotations to your code. We are going to look at the syntax more, but this is basically it. After you have annotated your code, you can run mypy from the command line to check your script, your project or your Python files. In this case, we annotated the variable s as an integer and assigned a string to it, so when we ran the check with mypy, we got a helpful message saying that the variable expected an integer but got a string. If you are familiar with other static analysis tools, or maybe even linters, then you are already familiar with the kind of integrations these tools have. mypy has a lot of integrations, but two that I find most useful are the integration with the code editor, where you can see warnings as you type — you can see the message here from VS Code, and this also enables the editor to provide things like better autocomplete for a better development experience — and the integration with your CI pipeline. This is a screenshot from GitHub; you can see that the same message appeared here below. So if you are using tools like Flake8, Pylint or other linters, you are already familiar with how this works, and it should be easy for you to incorporate it into your process. mypy has a lot of configuration options. I'm not going to get into them; I just want to put it out there that you can control every little bit of it. But I do want to say that the default configuration is pretty lax, pretty permissive, and if you want to enable all the checks, you can do that by providing the strict flag. So let's go over some of the basics of how to annotate variables using mypy. First we have the primitive types that we all know from Python already: integers, floats, booleans, bytes and strings. This is the simple syntax used to annotate a variable. You don't have to assign it right away, and you also don't have to annotate everything, because mypy can figure out by itself that n, in this case, is an integer — but this is the syntax for annotating a variable. You can also use objects — for example, a UUID from the Python uuid module — to annotate variables; in this case, the variable is a UUID. Another common example is datetime: if we try, in this case, to assign a date to the variable d, which we annotated as a datetime, we get a helpful message saying that we expected a datetime but got a date. So that's helpful.
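As a rough sketch of the basics just described — a few primitive and object annotations in one file; the variable names and values are mine, not the speaker's slides — running mypy over something like this would flag the last two assignments:

    # example.py -- minimal sketch of basic variable annotations.
    from datetime import date, datetime
    from uuid import UUID, uuid4

    n: int = 42                  # primitive type; mypy could also infer this
    name: str = "fosdem"
    u: UUID = uuid4()            # annotating with an object type

    d: datetime = datetime.now()
    d = date(2021, 2, 6)         # error: expected "datetime", got "date"

    s: int = "hello"             # error: expected "int", got "str"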
mypy also includes types for the different collections available in Python: list, set, tuple and dict. Here we can also see another component of the syntax: the square brackets. Using the square brackets, you can compose different types. In this example, you can see that the variable l is a list of integers, and in the same way you can define a set of strings. The tuple type can accept any arbitrary number of types, and each type corresponds to a position in the tuple — so the first value inside the tuple is an int, then a string and a boolean. In the dict type, you can annotate the type of the keys and the type of the values. So that's pretty helpful. Now, we already mentioned that Python is a very dynamic language, and mypy also comes with a set of predefined protocols. We're not going to talk a lot about protocols, but I just want you to know that there are a few very useful ones. One example that I use a lot is Iterable: basically, any object that implements the __iter__ method can be used where a variable is annotated as Iterable. In this example, the variable a is an Iterable of ints, and it will accept tuples, lists and sets. If you have a function, for example, that just needs to iterate over a collection, you can use Iterable and it will accept any of the three types. When you want to indicate that a variable can be None, you can use the Optional type — here n is an Optional integer, so it will accept either an integer or None. You can also combine it with collection types to define more complex types, like a list of optional strings, as in this example. Files are another very useful case. I often confuse images and zip files and text files, so with mypy you can use the IO type, and inside the square brackets you describe how the file was opened: an image can be an IO of bytes because it was opened with the 'rb' flag, and a text file can be annotated as IO of str. Now you know what to expect when you open and try to read the file. mypy also allows you to annotate functions, both the arguments and the return type. In this example, we have a function called concat that accepts a list of strings and returns the concatenated string. If we use it as expected, with a list of strings, it's okay; if we use it with incompatible types, we get a helpful message. If your function uses the yield keyword to return a generator, you can also communicate that to mypy and annotate the return type with Iterator; inside the square brackets you put the type the function yields — in this case an integer. If you need more control over the generator — maybe you send something into it or it returns something — you can use the Generator type, which accepts three arguments, like this. And if you want to annotate a variable as one of several types — for example, you want a text variable to be either bytes or string — you can do that with the Union type. The Union type tells mypy that this variable can be either bytes or str; if you try to assign a value of a different type, you will get a helpful message like the one below.
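Here is a small sketch of the kinds of annotations covered in this part — collections, Iterable, Optional, a generator and a Union. The function bodies are my own illustrative guesses rather than the exact slide code:

    # Sketch: function annotations with collections, Optional, Iterator, Union.
    from typing import Iterable, Iterator, List, Optional, Union

    def concat(parts: List[str]) -> str:
        # Accepts a list of strings and returns the concatenated string.
        return "".join(parts)

    def total(numbers: Iterable[int]) -> int:
        # Iterable[int] accepts a list, a tuple or a set of ints.
        return sum(numbers)

    def countdown(start: int) -> Iterator[int]:
        # A generator: the yielded type goes inside the brackets.
        while start > 0:
            yield start
            start -= 1

    def to_text(value: Union[bytes, str]) -> str:
        # Union: the argument can be either bytes or str.
        return value.decode() if isinstance(value, bytes) else value

    maybe_name: Optional[str] = None   # either a str or None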
Here below you can see that we have a constant with the tax rate for what and someone is trying to reassign it and we get a helpful message. Mypy also comes with a set of generics that you can define more, you can use for more complicated scenarios in this. The best way to understand generics is with an example. So in this case, we want to annotate a function that accepts a list of any type and split that list in the middle and returns two lists. So this function is not operating on the values inside the list, it operates on the list itself. So we don't care what the type of the list is, we just care that it's a list. So to communicate that to mypy, we can declare a type variable called t and we tell the function that we accept a list of type t and we are returning two lists of the same type t. Now we can use this function to split a list of integer, a list of strings and basically any list we want because the function does not care about the actual value t, it just cares about the fact that it's a list. And finally, if you are not sure or unable or you have some kind of problem and you want to bail out, you can always annotate something with the type any and mypy will basically allow anything on that type. You should probably use that as a last resort, but there are some cases that you would find this helpful. So mypy is evolving, mypy started way back in Python 2 and there was a lot of changes, it's constantly improving and because mypy eventually is syntax, it's very easy to show the progress. So way back in Python 2, you had to annotate using comments like this. A famous story is about Zulip, they integrated typings into their project and they wanted to do it in a way that supports both Python 2 and 3 and they used this form of typing. So starting with Python 3, you could start an annotate function like we saw before. And in Python 3.6, you could start annotate variables like this. Now, you might notice that in Python 3.6, the type task here is a string, that's because mypy is trying to evaluate this type and at this point it does not exist yet. So in Python 3.7, future annotations work was added to prevent mypy from evaluating things at runtime. And starting in Python 3.9, that's a big usability boost for mypy. You don't have to import the collection types from the typing model, you could just use the built-in types. So here we had list that we imported from typing and in 3.9, we could just use list, same way we can use set and dip. And a big leap is planned for Python 3.10, where you no longer have to import from future annotations and this pipe syntax is added for unions. So I think that this is a pretty nice looking piece of code and I think that any developer can look at this code for a second and understand exactly what it does and what are the types and what he gets in return. So this is something to look out for in Python 3.10. So to understand how we can truly benefit from mypy, I think that it's best if we do some example. And we start with something to do and then we build on that and see how we can use different features of mypy to improve our solution. So let's say we have this function in our project and this function is used to charge a user. So the function is accepting an amount, currency and number of payments. And now you need to use this function. For the sake of discussion, let's say that you are a new developer and you are new to this project and you need to use this function. So if you're lucky, this function is very short and maybe it has a good doc string. 
But if you're not lucky, this function has no doc string and this function is very long and you need to be through it and figure out how to use it. So just by looking at this function, you can maybe guess that amount is float and maybe the currency is a string representing the currency and payments perhaps the number of payments. Maybe the person that developed this function thought that using float for money is not a good idea and he used an integer instead and maybe the currency is uppercase. Maybe the currency is just the sign and the payments is a list of installments, payments that I want to make. And we haven't even talked about this dysfunction returning anything, is it returning the charged amount, something else. So this is a problem. I mean, we spend most of our time reading code and not writing code, code reviews and when we use other people's code. So readability counts. Readability is important. So myPy and its syntax is like built-in doc strings. It makes the code much more readable. So we are going to start and improve our function to make it more usable and more safe step by step. So the first thing we can do is just do some simple annotations. So right away we can see that the amount is an integer, not a float. We don't need to guess. The currency is a string. The payments is an integer with the number of payments. And we get a dict that has a string keywords and the values can be of any type. So this is an improvement. But let's start and improve this even further. So what about the currency? We don't know what types the currency can accept. So to communicate both to the user and to myPy, what are the possible values for currency? We can use the literal type. So literal is a way to type, primitive type, in this case a string. And if you pass multiple values to it, you're basically saying that the currency is either USD or Euro. So that's an improvement. If someone tries to use this function and sends currency that we did not define in our currency type, he will get an helpful message saying that this type is a string. Because it is illegal because this is the value we expect. So we already improved the situation with the currency. But what about the return type? This is just a generic dict with keys and just about anything as a value. So one way to type dicts in a way that is more readable is using type dict. So instead of using a dict sdrne, we can now define this type. We call it charged and it extends type dict that we import from the typing model. Inside this type, we define the keys and the values. So now we have a type charged that is returned from the function charged. It contains the amount of the currency, a list of payments, and the amount of tax paid before we had none of this information. And now just by looking at the types, we can know exactly what to expect in return from this function. So that's a huge improvement. Now it can be a bit confusing and you might think that the charge is an object, but this is just the type annotation and what you get in return is just a dict. But you know what to expect. So a way to improve on that, if we want to actually make charge an object, you can use data classes. Data classes are working very nicely with mypy. So instead of the type dict before we change that to a data class from the data classes model. And we previously had some redundancy because the amount was the sum of the payments. So now that we have an actual object, we can turn amount into a calculated property. And except from that, nothing is changing. 
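A rough sketch of what the refactored function might look like at this point — Literal for the currency, and a data class with a calculated amount instead of a plain dict (the talk's real code isn't shown, so the field names and the tax rate are assumptions):

```python
from dataclasses import dataclass
from typing import List, Literal

Currency = Literal["USD", "EUR"]          # only these two values are accepted

@dataclass
class Charged:
    currency: Currency
    payments: List[int]
    vat: int

    @property
    def amount(self) -> int:              # derived from the payments, so no redundancy
        return sum(self.payments)

def charge(amount: int, currency: Currency, payments: int) -> Charged:
    installment = amount // payments
    return Charged(currency=currency,
                   payments=[installment] * payments,
                   vat=int(amount * 0.17))    # illustrative tax rate

charge(100, "USD", 3)    # fine
charge(100, "GBP", 3)    # error: argument 2 to "charge" has incompatible type
```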
So that's great. Now let's consider the following scenario. What if someone is trying to charge $100 US in three payments and then tries to change the tax, the VAT? This is pretty dangerous because we want to make sure that what we return is exactly what is being charged in the end. So can we prevent that? One way to prevent this type of use of data classes is to declare the data class as frozen. When you declare that the class is frozen, it basically tells Python that all the properties of the class are read only. So when the user of this function tries to override the VAT, they will get this error. So we should be safe now, right? We prevented dangerous situations. Let's consider this situation. This time, we want to charge once again $100 US in three payments. But this time, the user of this function decided to change the first payment to zero. We can see that this is dangerous because if this object is passed on to another function, maybe a function that triggers an invoice or something, we can see that the calculated amount is now 66, which is not what we wanted to charge. So the reason frozen did not protect us in this case is because we changed an element inside the list. The list itself remained the same, because the list is eventually just a pointer. If we tried to change the entire payments list, we would get the same error as before. But because we only changed one element inside the list, it's allowed. So yes, this is bad. This is where one of my personal favorite features of MyPy comes in very, very handy, and that's immutable types. So MyPy comes with a set of immutable types, which are types that you cannot change, which are read only. So list, set and tuple become Sequence, and dict becomes Mapping. So if we just change the list from before to a Sequence and we try to make this change again, MyPy will raise a warning saying that we cannot assign to a Sequence, because a Sequence is read only. So this is much, much safer. So you now have a great system. It's secure. You can make sure that the objects that you return from your function are not being abused. And as we know, systems are living creatures, and things are always changing and evolving. And now we need to support more than just a successful charge. Okay? We now want to be able to support two different outcomes of charge. So before we had just the successful charge, and now it's possible for the function to require additional processing. To communicate that to MyPy, we can use a union like we saw before. A union says that the function charge can return either a charged or a requires processing object. To make things a bit shorter, we can define our own type alias. We can call that union a charge response and use that to annotate the function. So if we want to handle a response from the charge function, we can declare this function called process charge. And let's see what it does. First, it accepts a response, which is a charge response object. This is the type alias we defined before. So if the instance is requires processing, we can prompt the user for action. If the instance is charged, we can send an invoice to the user. So that's great, right? But we already said that systems are evolving. So it's entirely possible that someone on the charge team will add another type of response. So we need to add some form of protection. So one way to add protection is using runtime checks. In this case, we can add an assert.
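Pulling the last few steps together — the frozen data class, the read-only Sequence, the union behind a type alias, and the runtime assert — a sketch could look like this (RequiresProcessing and the print calls are stand-ins, not the talk's actual code):

```python
from dataclasses import dataclass
from typing import Sequence, Union

@dataclass(frozen=True)                       # attributes become read only
class Charged:
    currency: str
    payments: Sequence[int]                   # Sequence: mypy also rejects item assignment
    vat: int

    @property
    def amount(self) -> int:
        return sum(self.payments)

@dataclass(frozen=True)
class RequiresProcessing:                     # hypothetical second outcome of a charge
    reason: str

ChargeResponse = Union[Charged, RequiresProcessing]   # the type alias

def process_charge(response: ChargeResponse) -> None:
    if isinstance(response, RequiresProcessing):
        print("prompting the user:", response.reason)      # stand-in for real handling
    elif isinstance(response, Charged):
        print("sending an invoice for", response.amount)   # stand-in for real handling
    else:
        # runtime-only protection, discussed next
        assert False, f"unhandled charge response: {response!r}"
```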
And when you add an assert, this will fail at runtime. You can also raise an exception. It will also fail at runtime. So if you're lucky, you have tests, and this will fail in the tests. If you're not so lucky, you will get an error in production. So is there a way that we can make myPy warn us about cases where we forgot to handle one of the options? This is called exhaustiveness checking. Basically, we want to get myPy to make sure that we exhausted all the options and warn us if we did not. So to understand how we can achieve exhaustiveness checking with myPy, we need to understand how myPy works. And that is using what's called type narrowing. Type narrowing is the process myPy takes to try and narrow down the type of a variable at every stage of the program. So if you didn't fully understand, that's fine, because we are going to see an example. We are going to use a function from myPy called reveal_type. This function is used to ask myPy to show us what type he thinks a certain expression has at every point in the program. So at the beginning of the function, we ask myPy to tell us what is the type of the charge response. And myPy says that at this point, the revealed type of charge response is either charged or requires processing. Now, this makes sense because we haven't done anything yet. If we check if the instance of charge response is requires processing, myPy is smart enough to understand that at this point, the type is requires processing. MyPy is also smart enough to understand different types of conditions. So if the variable is not an instance of requires processing, it must be an instance of charged. And if we handle both cases, then this becomes unreachable. Now can we use the fact that this is now unreachable and make myPy perform exhaustiveness checking for us? This is where a nice feature of myPy and a little hack that we stitch together come in. We call it assert_never. This is a special function that we define in our code and you can define in yours. It accepts a special type called NoReturn, which communicates to myPy that something is wrong. Okay. Now, if we place this function in a place that is unreachable, then there are no issues and everything is fine. However, if this function is placed in a place that is reachable, in this case, we forgot to handle the charged case. And myPy will fail when he tries to evaluate this function and he will do that with a very helpful message. So in this case, we can see that we forgot to handle the charged case and myPy told us that he expected NoReturn, but instead he got charged. So we can understand from that that we forgot to handle the charged case. So now if someone somewhere else in our company, our huge, huge, huge organization decided to add another type of charge response, we will fail at compile time. We can even get an error or a warning in our code as we type. So exhaustiveness checking using assert_never can be achieved on any enumeration type: Union, Literal, and Enum. It's a very useful technique and I encourage you to use that in your code. So before we finish, I see that I only have like maybe one minute left. I want to help you decide if you should use myPy. So if your team is starting a new project, you should definitely use myPy. It will make your code more readable and it will be easier to collaborate because of that. You can write less trivial tests. You no longer need to test things like value errors and attribute errors because myPy does all that for you at compile time.
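Circling back to the exhaustiveness check described above, here is a minimal self-contained sketch of the assert_never pattern, applied to a Literal type rather than the charge example (newer Python versions also ship this helper as typing.assert_never):

```python
from typing import Literal, NoReturn

def assert_never(value: NoReturn) -> NoReturn:
    # Only reachable if a case was forgotten; mypy then reports the leftover type.
    raise AssertionError(f"Unhandled value: {value!r}")

Currency = Literal["USD", "EUR"]

def symbol(currency: Currency) -> str:
    if currency == "USD":
        return "$"
    # the "EUR" branch was forgotten:
    assert_never(currency)   # error: Argument 1 to "assert_never" has incompatible
                             #        type "Literal['EUR']"; expected "NoReturn"
```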
You can also be more productive because now you don't have to run the entire test suite in order to get an exception or an error. If your team is already working on a locked project and you're contemplating whether you can start using myPy and definitely yes, you can start using myPy gradually. A lot of large projects did that, Dropbox, MyPy, and Zoolog all did that and the experience of it all according to their blogs is very good. Now if you're working on a project by yourself and then yes, you should definitely use myPy as well because any code that you haven't worked on for more than a month might as well be written by a different developer. So this is it. No single takeaway from this talk should be myPy. Just do it. Seriously, just do it. So this is it. I have no more time left. My name is Khaki Benita and you can check out my blog. I write a lot about Python and Django and Postgres and SQL and performance. And I'm also pretty active on Twitter and I send an email to my mailing list around once a month. And you can also send me an email in this address if you have any comments or questions about this talk. So thank you all for listening to me and being with me in this talk. And thank you very much.
Mypy has been around since 2012, and in recent years it's gaining widespread adoption. As the framework continues to evolve and improve, more and more useful features are being added. In this talk I'm presenting some hidden gems in the type system you can use to make your code better and safer! The Mypy typing system, and the complementary extensions module, includes some powerful but lesser known features such as immutable types and exhaustiveness checking. Using these advanced features, developers can declare more accurate types, get better warnings, produce better code and be more productive.
10.5446/53305 (DOI)
you Hello, Fosdame 2021 and welcome to this talk on code reloading techniques in Python. In this talk we'll be reviewing a few techniques that allow you to reload code and apply the new changes you made to your program. We'll look at a few different techniques to do so and we'll have an in-depth look at the inner way of how they work precisely. Few words about myself, my name is Hugo Herter, I'm a software engineer consultant. I discovered Python and Linux in 2003 and have been using them a lot since then. My first Fosdame was in 2004 and I think I attended almost every edition since. When I'm passionate about free software, I'm really happy this year to be able to attend all the dev rooms without missing any. When I was starting to learn Python, I was quite amazed by how easy it is to play with all the internals of the language and the constructs. One thing that I found pretty interesting is this exact function that allows you to execute any Python code finding a string that might come from anywhere. As you can see in the example on the right, that just executes Python code from the network and gives you a remote Python shell on any machine that runs this script. It's basically the same idea you have in Jupyter notebook with more security and more advanced features. When I was learning Python, I started to write my own web framework. This was back in the times when Flask and Django didn't exist. We only had Zope and a few frameworks that didn't exist anymore. On that web framework, I used a function called exec file which is the same as exec but for a file, it would execute all the Python code in a file. I used this to be able to make my changes appear immediately when I was changing some files in some web pages. Which brought me to this idea of code reloading. It's something that I have been playing with for a long time. I wanted to share this with you because there are many interesting techniques here and you might not know about all of them. What I called code reloading here is the process of replacing part of a program with a new version, part of all of it. I'm focused here on the source code because it's the term that's used mostly for interpreted languages. When you're using compiled languages, there are other terms that might mean similar things but they're slightly different. I talk about called reloading and what I mean by called is that you take the process and you stop it and then you restart it. The hot code reloading means that you keep the process running and you patch it with a new code without stopping it. As an illustration on the left, we have some kind of cold code reloading on a racetrack. You stop the car, you have access to all the internals, you can change everything you want but the car is out of the circuit, it's not running anymore. The driver is out of the car as well. On the right side, you have hot code reloading where the driver is still in the new car. You may not want to change everything. You don't have access to the chassis, changing the engine might be a complicated task here but you have access to quite a few pieces of the car already and if you just want to change the color or type the wheels, it's pretty easy to do. You don't need to stop the car and it goes much faster. It's a bit the same idea we have in programming with called, called reloading and hot code reloading. So called, called reloading, you stop the process and then you restart it again. It's easy, it's reliable. 
You've all done it if you did some Python code or any kind of programming, it's the default way of doing it. The issues you have with it is that you lose the states so getting that state back might take time. If you are programming a video game for example and you are a Vatorite in a special place and it took you some time to get there with certain enemies and you want to tweak the behavior of the enemies, in that case restarting the entry game every time might be pretty annoying and you would be interested in something that keeps the state of the whole program. The easiest way on Linux to do called, called reloading is control C up arrow enter to just run the same command again and it's a super easy way to do it because we all know these first shortcuts by reflex as we use it all the time in programming. Let's have a look at how some web frameworks do this called, do the, some web frameworks do code reloading and they used this code approach of restarting everything but they do it in an automated way. Let's have a look at how they do it precisely. The entry point is this function here run with reloader and you pass it a function. It will run this main function and enable the reloader on the side and stop that function if the code has changed to restart it. The first thing we see is that it's calling here single dot single six term lambda rx sys dot exit which means if the process receives the single six term to terminate from the system then it will exit to make sure that it doesn't hang if it receives this single. So this is a way to behave properly even in multi-threaded environments. Sometimes when you have multiple threads the signals are not received by all threads and you have to press control c a few times to stop it for example using some frameworks. Then we see that it's initializing a reloader here using this function get reloader which is defined a bit higher. There are two reloader classes in Django. One is the watchman which will watch for files on the file system and the other one is the stat reloader which will just watch every second if the properties of the files have changed and in that case say well the properties have changed so we should trigger a reload. The watchman is faster and more powerful but the stat reloader works as a fullback to this. And then it will pass this reloader as well as the main function here to this start Django function which is right here. So that function basically starts our main function here in a thread. So it creates a thread to run this main function. It sets it as a demon and starts the thread. So now our main function is running but in a thread not in the main thread but in a side thread which is controlled by this function. Then it starts the reloader and it passes the thread to that reloader class which will be in charge of stopping it and restarting it if something has changed. And it will run this in the loop as long as the reloader should not stop. So this is the Django approach. Start the main function in a thread and then look for changes on the file stem when they happen have this reloader class to just stop the thread and restart it. When looking at how Flask handles this reloading it's a bit more complicated because it's not within Flask, it's within WorkZook which is a web framework library used by Flask. But we can find something similar. We have this run with the reloader function that takes a main function as an argument. 
It does a register to the same signal as Django and then it starts a thread here with the main function and it launches the thread here. And if we look at the reloader it comes from these reloader loops and when looking at it we can see that it's also using something similar as stat reloader and a watchdog reloader and these are very similar to those used in Django. So we can assume that the behavior is identical even if the codebase is different here. So both Django and Flask use these watchman or watchdog reloaders under the hood. But how do these work? Well there is something called inotify on Linux and there are similar APIs on other other platforms that allow you or your process is to watch for file system events and receive a notification without having to constantly look if something has changed. On Linux it's inotify which you can use directly from the library pynotify if you're using Python or there is this library called watchdog that you can use on all main platforms. The way the interface to use watchdog looks like this, so you can create an observer that is in charge of receiving these signals and will run in a thread in the background and you can then schedule some handlers on it and say well I want to register for example recursively if you're looking for on a folder or not and then just start it and it will work using a callback based approach. Because it runs in the background I added this input at the end just to make the program block and to be able to see something before Python exits. Let's now look at hot code reloading. So in this case we want to keep the process running, we want to replace the code in memory. We hope it won't crash the program, this might happen if we have inconsistencies and the new code is not compatible with the existing one or the existing state and we want to take advantage of the fact that it keeps the state and it's really fast to do this. There are two challenges in this case, one is we need to find and load the new code because if you're just reloading everything you might as well restart the entire program and we need to replace the references. So in Python you can pass a lot of objects as variables and you can have references to these objects in many places and we need to find all these places to be able to make them use the new code instead of the old one. There are other languages that also allow this kind of hot code reloading. In Java for example you have this functionality called hot swap which basically allows you via the debugger to specify a class and ask the virtual machine to replace the class with the new compiled code from a class file. In C and C++ you have DLL code reloading that allows you to reload a dynamic linked library or shared library. In this case as well they need to share the same interface, they need to expose the same functions and classes and methods that the previous version did. There are some changes that you're not allowed to because then it would break the compatibility. In Python there are three ways of loading codes. One is eval that allows you to evaluate a function and that one is not very useful in our use case. However, the other two methods do work and they both have their advantages and disadvantages. 
The first one is the import module that you are using when you are importing a library and in a way similar to the DLL libraries you can reload a module that has already been loaded and have the new version replace the old one and exec allows you to execute just any Python code from string which can also be used in some cases. What you see here is on the left a text editor with some Python code and on the right a Python console. So the standard way to load Python code is using import. I will import my module and then I can call module.sayHello and it will just run it. If I change the source codes, say hello will still be at the old version as expected. There is however a library we can use with in Python to reload this module from import lib and we can reload here module. And now if we call our function again we can see that we have the new version. However if we have a reference to this function then it is a bit trickier because that reference will not be updated. Module.sayHello when we call say will still be calling this function. Now let's update the source code and reload the code. Module.sayHello has the new version. However say is still using the old version. So now we have one version that doesn't exist on the file system anymore, the previous version but it's still in memory, it's still loaded. And this is something we have to be careful about when we are doing hot code reloading. In practice however sometimes we are facing code that's not optimal for the reloading that we were just seeing. For example here we have a connection to a database that initializes a singleton. This is a pattern we see in a lot of code where that connection is created just within the module and so it's executed every time we import it. So the first time we import our module it will take some time to initialize it. We can now call our function. Say hello. Let's say we want to update the code of our function. We need to reload it so we can use importlib.reloadModule2. And again we have to pay the cost of the time to establish this connection. And now we have the new value working. So in this case this use of importlib.reload can be quite painful. Here we are just facing a delay but sometimes there are also threshold limits that are still appearing as busy. There are other issues that we can face that make this approach more difficult. If you can use it it's really nice because it's really simple. It's all built in Python and it's just one line. So go for it but it's not always optimal in some kind of complex software. There is however another approach we can use to reload this function. Say hello without going through the reloading of the database. And this is the idea here is to go get the new value of the source code and then to execute that using the function exec. So for the first step we need to get access to the source code of a function. And there is a tool for that in Python. In the library inspect. So I will just import inspect. And then I can use inspect.getSource of module 2.sayHello. I have the source code of the function. And the really cool thing here is that if I do add some code for example I add pass here and then I do a return here. And let's do some more changes. Let's do it. Call it again. And you can see here that this function gets source, gets the new value of the code and not the old one. So it's quite smart there which is really handy in our use case. So we now have the source code in a string and we could just try to execute it. 
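Before the exec-based route is developed below, here is a condensed sketch of the importlib.reload behaviour just demonstrated (it assumes a local module2.py like the one in the demo, with a say_hello function and slow module-level setup such as a database connection):

```python
import importlib

import module2                 # importing runs the module-level code (e.g. the DB connection)

say = module2.say_hello        # keep a direct reference to the function

# ... edit say_hello in module2.py on disk ...

importlib.reload(module2)      # re-executes the whole module, including the slow setup

module2.say_hello()            # the new version
say()                          # still the old function object: a stale reference
```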
However if we want to replace it within the module so that other functions that depend on it would be able to access it. We need to also give it access to things inside this module. For example db. If we execute this module here it will not have access to the db variable from that module. So what we need to do is use inspect to get access to the module of that function. So let's say module equals inspect.getModule of module2.sayHello. In this case we know which module it is. It's module2 so it will just get the same thing. But sometimes we just have the function around and we don't know directly from which module it is. So we can use this. And then what we can do is create, we want to be able to extract the new value of the function. So we'll create a local directory. I'll call it locals underscore equals dictionary sorry not directory. And then I can just execute the source code. So just copy this or copy the code here within the module that underscore and underscore addict which is a dictionary representing the namespace of what's within that module and my local addict here that I created just above. And now let's look at locals. We have say hello and if we look, compare it to module2.sayHello. You can see that they have the same identifier here. And if I call one and call the other. So this is the old one. And say hello. And this is the new one. So and the identifiers look similar but if we look here we can see they're not exactly the same. So let's not be mistaken by the fact that they look really close. Now we can just finally update this function if you want. So we can do module2.sayHello equals locals. And finally module2.sayHello. And there we have this. We just reloaded the function without reloading the connection to the database here. If you want to use hot code reloading in your projects and you don't want to write yourself the methods to do it using the tools we have seen previously, you can also use this library called reloader from reload their import auto reload. And here in this case you just decorate your functions with this auto reload decorator and it will automatically replace them using the proxy method we've seen above and the instance reference to the class method with the new code when the code changes by watching the file system. So this is a wrap up of all the methods we've seen previously. You can also manually specify when the code should be reloaded by changing the decorator. In this case you can manually reload the class or you can start a timer that will just reload it every second or again look at the file system and as the file system changes trigger the reloading of this class. It's on PyPuy so you can just install it using pip install the reloader and try start using it. The source code is pretty simple. It just fits in one file and then you have a directory with a few examples. Thanks for watching and join me in the question and answer matrix room if you want to discuss any things. You can find all the examples we've seen on this GitHub repository. Thank you. Thanks, Hugo, for your talk. So I think we can now start with the questions. First question is how does reloading using execs behave in terms of compiling to intermediate forms like PyC and so on? So it's using, Python internally is using bytecode so exec is a two steps process. The first step is it will compile it to bytecode. It will just not store that bytecode on disk and then it will execute that bytecode as like the rest of the bytecode. 
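To tie the exec-based technique together, here is a condensed sketch of the steps just demonstrated, again assuming a module2.py with a say_hello function and a module-level db connection (the library mentioned at the end is published on PyPI as reloadr):

```python
import inspect

import module2

# 1. grab the *current* source of the function, as it is on disk right now
source = inspect.getsource(module2.say_hello)

# 2. find the module it lives in, so the new code can still see names like `db`
module = inspect.getmodule(module2.say_hello)

# 3. execute the source inside the module's namespace, catching the new object locally
locals_: dict = {}
exec(source, module.__dict__, locals_)

# 4. patch the reference on the module -- without re-running the DB connection
module2.say_hello = locals_["say_hello"]
module2.say_hello()            # new version, existing module state preserved
```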
Okay, and are there examples of applications that use hot code reloading? Usually it's a process that at least I use for development. So it's not used that much in production because then it can cause a lot of issues but it's the hot code reloading in general is used a lot by game developers because they are tweaking the dynamics of the game while playing it and restarting the entire game every time you make a change to some logic doesn't make sense in that case. And how do you deal with side effects like things like shared resources and so on? So the idea with hot code reloading the way I presented it, you keep these resources on so you keep the state, you keep these resources. Of course if something changes outside of the scope of your changes then you may have compatibility issues and then you just have to accept it and restart the whole process. Okay, any further questions from the chat? What are the dangers that remain, could you fix them? Well, I don't think, I think Python itself is not designed for hot code reloading and other languages have allowed this in a safer way. So in order to make hot code reloading easier in Python, I think there will be some big changes within Python would be required. If you take the example of Erlang, that's a language that's designed to allow hot code reloading and it's built in the language in the tooling. If you take the example of Java, there is a rule you can reload a class as long as its interface does not change and your ID, your tooling will check that. In Python there are no such checks so at the moment there are no guarantees that the new code is running. So do you add the creators for reloading in your code base? Is there a best practice to ignore them at the moment running your code in production? I think it aligns with the other point that adding the creators just for the sake of reloading for half an hour doesn't make sense. It's a trade-off. I use the creators because that allows me to know exactly what is being hot reloaded and what is not. And also as a way to work with the references. Another strategy I thought about was to try to replace in memory all the references to the function with the new one within Python. And that requires much lower access to the internals of Python.
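For reference, the watchdog interface described earlier in the talk follows roughly this callback-based pattern (the path and the handler class are illustrative):

```python
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class ReloadHandler(FileSystemEventHandler):
    def on_modified(self, event):
        print("changed:", event.src_path)   # a real reloader would restart or patch here

observer = Observer()                        # runs in a background thread
observer.schedule(ReloadHandler(), path=".", recursive=True)
observer.start()

input("watching for changes, press Enter to stop\n")  # block so the script doesn't exit
observer.stop()
observer.join()
```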
While iterating rapidly on Python code, we want to see the result of our changes rapidly. In this talk, we will review the different techniques available to reload Python code. We will see how they work and when each is the best fit. The talk will cover both cold and hot reload techniques: Cold reload techniques reset the application state between each reload. Examples include Django and Flask's autoreload tools. Hot reload techniques keep the application state despite the code changing. These include Jupyter kernels and 'reloadr' [1], an open-source module developed by the speaker to allow stateful hot code reloading.
10.5446/53306 (DOI)
Hello, my name is Sebastian. I hope you're having fun today. I know I sure am. Today, I'm going to, well, first I'm going to introduce myself, but after I'm done talking about myself, I will be talking about self in Python. And more specifically, I will talk about how Python inserts self into methods. To do that, we need to talk about something called descriptors, and we basically have to dive right into the middle of the Python data model. Don't worry, I'll start slowly and we'll build it up from there. Now, as said, my name is Sebastian. I live with my girlfriend in Reichweig, that's near the Hague in the Netherlands. And in my daily job, I'm a software engineer. I work for Ordina and more specifically for the Python years unit. And that means that I get to play with Python every day. And that's, that's pretty nice. Another big part of my life is Python Discord. I'm one of the three owners of Python Discord. And it's a large online community for Python enthusiasts. We have about 140,000 members. So I'm also really happy that we have about 100 volunteers to help us run the community. Now, if you're not familiar with Python Discord, we mainly focus on Python education, both for beginners and advanced users, but we also organize Python related events and maintain several open source projects. One thing we started doing recently is supporting other organizations in the Python ecosystem. And one specific example is that we help the Python software foundation with organizing the C Python core developers print a few months ago. Well, anyway, I think I've been talking about myself enough. So let's talk about Python and more specifically, let's talk about self in Python. When talking about self in Python, I think it's really important to go back to the basics, because there's a little bit of magic about self in Python, but you don't really see that magic anymore. If you've already been programming in Python for quite a while now, you get used to self, you get used to how it works and what it does for you. That means that you don't really notice anymore that self in Python doesn't really work like similar concepts in other languages. So let's go back to the basics and see what the magic of self is all about. To do that, I wrote a very simple class. It's a guitar class. And that's because I really like guitars. This one is really simple. Basically, if you have a guitar, you can give it a name and then the Dunder init method will assign that name as an attribute of the guitar instance. And if you're unfamiliar with classes, that means that you can do something like this. You can create a guitar, give it a name. In this case, it's Warwick streamer, which is just the name of my favorite bass. And then I assign the name Warwick to that guitar instance. And because we've assigned the name as an attribute of the instance, we can now do something like this, Warwick dot name. So we access the name attribute on this specific guitar. And we see that we get the name right back out. Now, this isn't really interesting. So I'm going to add a little method in there. It's a really simple method. It's a play note method. That's because the actual implementation isn't really important here. But anyway, what you can do with this method is you can play a note with one of the guitar instances you've created. So if I wanted to play a note with my Warwick, I could do Warwick dot play notes. So I'm accessing the I'm calling the method on this specific Warwick guitar. And then I pass in the note. 
In this case, it's C sharp, which to me is a musical note. It's not a programming language. And then if I hit enter, the message gets printed by Python, my Warwick streamer plays the note C sharp. And it doesn't really look all that exciting. But there is actually a little bit of magic going on here. And in order for us to see that magic, we have to zoom in on the function definition of plain notes. So if you look at the function definition here, and I'm calling it a function definition on purpose, not a math definition, because it's still a function. You can see that this function has two parameters in the parameter list. It has a self parameter, and it has a note parameter. Now, if a function has two parameters, it really needs to get two arguments when you call it. Otherwise, it won't work. You'll get an exception. Now, so this method, this function really needs to get two arguments in order to work. But if you now look at how we've called our method, you can see that we've only provided it with one argument, the musical note. Now, that musical note will end up here with the note parameter. But where's the value for self coming from? There's no value for self here. At least there's no obvious value for self here. So what's going to happen? Well, if you've been programming Python for a while, you probably know that Python is going to give you a value for self. More specifically, what it's going to do is it's going to insert the instance you called the method on. So in this case, we called the play note method on the Warwick base. So it's going to insert the Warwick base as the first argument into the function. So self will be equal to Warwick, to my Warwick base. That's why we could use the name of the Warwick base. But if you think about it, this is not really obvious. This is just something that you have to know that you have to get used to. And then you can start working with it. But there's no indication here of how it works, why it should happen in the first place. And in fact, a lot of beginners are a bit confused by this. But at the end of the day, this is what Python does for you. So it really looks like some kind of magical language feature that you just have to learn. And you can't really do anything like it yourself. It's just something that Python does for you. Well, in fact, that isn't true. Otherwise, this talk would end now. There's actually a well defined way in which this happens. You can actually use that way yourself as well. So that brings me to the central question in this talk. And that is, how does Python insert itself into methods? Now, in order for us to answer this question, we have to look at what happens if you define a function within a class. In part, that's because there's a common myth. And that says that as soon as you define a function within a class, it will turn into a method at definition time. That's not true. But there is something important happening there that that in part will explain what's going what what's actually going on. So let's get back to our guitar class. I've got it here. And now we have to think about what will happen if Python is going to define this guitar class. Now, first of all, Python needs to create this class. It needs to keep track of all the attributes and all the properties of this specific guitar class. And since everything in Python is an object, this class will also be an object. This is a little bit confusing for some people, but classes are in fact objects themselves. 
And in this case, I've drawn that with this little rectangle here just to keep track of stuff. So this is where it starts now. And now we're going to focus on what happens when Python gets to this section. This is a defined function statement. And we have to think about what happens when Python executes such a defined function statement. Well, from a very high level perspective, the perspective that we need to understand what's going on, Python basically does two things. The first thing is that it has to create that function. Now, since everything in Python is an object, so is a function. So if you create a function, you will actually get a function object floating around somewhere in memory. So this function object is an instance of the class function. And it keeps track of what of everything there is to know about this individual function, like the code that needs to be executed when you call this function, it keeps track of the parameter lists. If there are default values, this function object will track them. So everything there is to know about this function will be contained in this function instance. Now, in Python, we don't really have use for for objects that are just floating around somewhere in memory, we need a way to reach and use them. And that's why Python, we have to use them. And that's why Python will assign a name to the function in the current namespace. Now, that's really important here, because we're defining a class, so that namespace is the current class. And that means that as soon as we try to define a name, it will end up as a class attribute. So I've tried to indicate that here with this little label inside of the class object. And then Python will assign the class attribute to the function. What this means is that we can now use this class attribute guitar dot play note to access the function that's somewhere in memory. So let's demonstrate that if I do guitar dot play note, and I print the result, you'll see that this is just the function object. So function, guitar, the play note, and then some memory address. So importantly, this is still a function. So one of the important conclusions here is that this function object is still a regular function object. It's not it has not magically turned into method. It's not a method yet. Self hasn't been inserted. This is still a regular function. And in fact, if you were to call the function like this, so guitar dot play note, it really needs two arguments. Otherwise, you'll get an exception. Now, another thing to realize here is that it's really important that we've assigned a class attribute to that function object. And that's that that turns out to be quite essential to how methods work in Python. And to understand why that is, we're going to zoom in on that a little bit. So let's zoom in on the fact that we've assigned a class attribute to that function. And what we're specifically going to do, we're going to look at what happens if we now try to use this function as a as a method on an instance. So let's recreate my Warwick guitar. So this is the same thing as we did before, Warwick is equal to guitar with the name Warwick Streamer. And then think about what will happen if we try to use that play note attribute, that class attribute directly on this specific Warwick guitar. Now, why is it, why is this a little bit odd? Well, if you think about it, play note is a class attribute, it's an attribute of the guitar class. It's not an attribute of this specific guitar of this specific Warwick Streamer guitar. 
So why would this work? Warwick doesn't even have a play note attribute. It has a name attribute, it has some default attributes, but it doesn't have a play note attribute, because that belongs to the guitar class. Well, in fact, as it turns out, if you access such an attribute on an instance, Python doesn't only look at that instance, at that specific object, but it will look at the class of that object as well. So in this case, if we try to access play note, that's not an attribute on Warwick, but it is an attribute on the guitar class. So Python will still return a value for you. And we can show that, we can hit enter. What we see is that we indeed get a value back. But hey, that's odd. It's not a function anymore. We get a bound, something called a bound method back. As you can see, it's still based on the function. It's the bound method, guitar, guitar dot play note. But it's not the same function object anymore. So what's going on here? Well, as it turns out, attribute access in in Python isn't as straightforward as you may think. And in fact, you can objects can modify what happens when attributes are accessed. And that is precisely what function object objects do. What they do is whenever they notice that they've been accessed as an instance attribute, instead of a class attribute, they say, hey, we're being called as a method on an instance, we need to bind this specific instance to to ourselves to the function. And so that function object will then create a bound method by inserting that specific guitar already into the function for the self parameter. And then it will return that. And that is what a bound method literally means. It means that we've taken the function, we've bound it to the specific Warwick guitar, we've inserted Warwick already as the first argument. And then we have a bound method and everything we call it with after that will be additional arguments to that instance. So if we now call Warwick dot play note, we only have to provide one argument, because the instance itself has already been bound to the function simply by accessing the class attribute on the Warwick instance. So that's pretty neat. And basically, this is a really abstract explanation of how methods work. But it's a bit too abstract. So let's look at a more concrete example. So and to do that, we need to talk about something called descriptors. Now, what are descriptors? And what do they have to do with attribute access? Basically, descriptor objects can customize attribute lookup. So you're simply looking an attribute up attribute assignments. So trying to assign something to an attribute and attribute deletion when you do something like del object dot dot attribute. And the way they do that is that they can implement special methods from the Python data model, so called Dunder methods. And specifically, if they implement a get method, they can customize attribute lookup. So if we do Warwick dot play note, if the object we're accessing here has a get method that get method can influence or customize what happens when we try to do this. If the object has a set method, we can customize attribute assignments. So whenever we try to do Warwick dot play note is equal to a new value. If the object implements a set method, that set methods can now determine what will happen here. And finally, there's a delete method. And that and it customizes attribute deletion. 
So whenever you do del Warwick dot play note, instead of just executing this, if the object implements a delete method, that delete method will be called and will do the work instead. And what is the most important thing here? Well, functions are descriptors that implement such a get method. And so that's why we're going to focus on that get method right now by simply implementing one. So let's look at this guitar class again. To make a little bit of room here, I'm going to collapse the methods. So they're still there. You can assume they're still there. I've just made them smaller. And what I'm going to do is I'm going to add a new class attribute. And it's is my favorite. And then I'm going to add a new class attribute. And that is my favorite class attribute will be assigned to a favorite descriptor. Now, what is the idea here? If I create a guitar instance like my Warwick streamer, what is supposed to happen is that is that I can do Warwick is my favorite. And then it should return true or false based on whether or not this is my favorite guitar. And this needs to be taken the descriptor, because the descriptor has to check which name the guitar has. And in this case, because Warwick streamer is my favorite guitar, this should print true. Let's look at how we can do that. In order to do that, we need to write a new class, a favorite descriptor class. Since we're only dealing with attribute access here, and that is determined by a Dunder get method, this class only needs to implement a Dunder get method. It gets two arguments besides self. It's the instance. And that is the instance you're accessing the attribute on. So in the example above, that would be the Warwick guitar. And then there's the owner. And the owner is the owner class. And that has the attribute. So in this case, it would be guitar, because the descriptor is assigned to the guitar dot is my favorite attribute. Now, what do we need to do next? Well, this method, this get method will be called whenever we try to access that dot is my favorite attribute. And now we can check if the instance we're calling it on has Warwick streamer as the name. So I'm going to add an if statement. I'm using get attribute here to get the name attribute out of the instance. The reason I'm not doing instance dot name is really simple. I don't want this to break. In principle, there should not be guitar instances without a name, because the init methods always assigns that name attribute. But it could be that Python will pass none as the argument as the argument for instance. And that happens specifically when we try to access the is my favorite attributes directly on the class. So if you do guitar dot is my favorite, there is no instance yet. So the instance parameter in the get method will get none. And obviously none doesn't have a name attribute. So we need to be a bit careful here. So that's why you get attribute with a non default value. Then I can simply check if the name is equal to the Warwick streamer. If so, we return true. And if not, we return false. Now let's see that in action. So I create a Warwick guitar instance. I then access warwick dot is my favorite. This means that the Dunder get method will get called with Warwick, for instance, and guitar for owner. It then checks the name attribute. In this case, it's equal to Warwick streamer. So it will return true. So the result of accessing this attribute is determined by the descriptors get method. 
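A sketch of the descriptor example as described (the talk's exact code isn't shown, so minor details may differ):

```python
class FavoriteDescriptor:
    def __get__(self, instance, owner):
        # `instance` is the guitar the attribute is looked up on (None on the class itself);
        # `owner` is the class that holds the attribute (Guitar).
        if getattr(instance, "name", None) == "Warwick Streamer":
            return True
        return False

class Guitar:
    is_my_favorite = FavoriteDescriptor()

    def __init__(self, name):
        self.name = name

warwick = Guitar("Warwick Streamer")
print(warwick.is_my_favorite)   # True -- computed by __get__
```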
Now let's create another one of my guitars, which is a vendor, a vendor jazz base in this case. Now I'm going to access the vendor dot is my favorite attribute. This will call the Dunder get method again. But now with the vendor base as the instance, it will then get the name of the vendor base, which is not equal to Warwick streamer, which means it will return false. So as you can see, instead of just being a simple attribute is my favorite, it will actually call a method, the Dunder get method and whatever that method computes has as the return value, that will be the return value of the attributes. So now that we've seen a descriptor in action, I want to get back to functions. And in fact, functions also have such a get method. That's a little bit difficult to show you the actual implementation. But we can actually call that get method manually just to see what it does. And that's what I did here. And in order to do that, I need to access the original function objects. And if you remember, we can do that by accessing the function as a class attribute. So that's what I did here. So what I did is a guitar, a plain note that gives me the original function object, then we can call the Dunder get method on the original function objects. Well, a get method expects the instance. In this case, I pass in Warwick, and it expects the owner class. And I pass in guitar. Now, if I hit enter, if everything goes well, we should get a bound method. And indeed, that's what we get. So by calling the get method manually, we can actually bind the instance we pass in to the function to get a bound method. And this is also what happens if you access a method on an instance. Well, that brings us to the end of the talk. I have a short summary for you. And the most important thing of today is that the descriptor protocol allows you to customize how attributes work in Python. And the second thing is that functions implement the descriptor protocol to create bound methods. I don't really have any other important take home messages, except that you can do a lot of cool things with descriptors. There are also a few things that I haven't covered. But there are nonetheless important, I haven't given you any examples with done their set and done their delete, that's some purpose, because they only had 25 minutes. And I also did not explain the difference between data and non data descriptors. There's actually something interesting that happens as soon as you add a set or delete method. But it's kind of complicated. So I did not have time for that today as well. But I do want to recommend you to read the how to descriptor guide in the official documentation. It's written by Raymond Hettinger. It really explains these concepts really well. And another good recommendation is this book. It's fluent Python. It's my favorite Python book. And it also really, it really does a good job at explaining descriptors. It calls it overriding and non overriding descriptors. But it's all the same. Well, finally, I want you to be aware of a special done their method. It's called set name. And it really works well in combination with descriptors. Raymond Hettinger also talks about it in his how to descriptor guide. It's not yet in this version of fluent Python. But I've read that a new version, a new addition will come out in a few months. Well, that's it for me. Thanks for listening. I'll be here for the Q&A, so you can ask me questions and hopefully until until next year.
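And as a reference for the manual binding shown near the end of the talk, calling the function's __get__ yourself produces the bound method (reusing a pared-down Guitar class):

```python
class Guitar:
    def __init__(self, name):
        self.name = name

    def play_note(self, note):
        print(f"My {self.name} plays the note {note}")

warwick = Guitar("Warwick Streamer")

plain_function = Guitar.play_note                 # still a plain function object
bound = plain_function.__get__(warwick, Guitar)   # bind the instance manually

bound("C#")                                       # same as warwick.play_note("C#")
```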
When someone starts learning about classes in Python, one of the first things they'll come across is "self" in the parameter list of a method. To keep it simple, it's usually explained that Python will automatically pass the current instance as the first argument to the method: "self" will refer to the instance the method was called on. This high-level explanation really helps with keeping the focus on learning the basics of classes, but it also side-steps what is really going on: it makes it sound like the process of inserting "self" is something automagical that the language just does for you. In reality, the mechanism behind inserting "self" isn't magical at all, and it's something you can very much play with yourself. In this intermediate-level talk, Sebastiaan Zeeff will take you down into the heart of the Python data model to explain how the mechanism behind inserting "self" works. He will talk about the descriptor protocol and how it allows you to modify how attributes are accessed, assigned, or deleted in Python. He hopes that understanding how descriptors work will demystify "self" in Python.
10.5446/53307 (DOI)
Hi everyone, I'm Nitish and in this talk I'm going to tell you about how you can build data apps using Python. For this talk I'll be using an open source framework called Streamlit. Yeah, in this talk I'll tell you about my experiences building a couple of applications using Streamlit. So bit about my background. So I'm mostly a data engineer with experiences building like data pipelines and small services like APIs around it. I really like to prototype and like to add in hackathons building random stuff over the weekend. And I mostly use Python for my experiments. I live in Munich, I work for an e-commerce company and I also organize the PyData Munich chapter. I really love to travel and unfortunately I couldn't do much of travel last year and so this is also one of my use cases in this talk. And you can find me on most social media with the tag Nitishar. So coming to why use something in Python instead of JavaScript which is the most obvious solution for most of your visualization needs. Well JavaScript is really nice. It has lots of frameworks and libraries. But if you're not used to using any one of them regularly, it's not the easiest to get like a really good compact solution out. And especially if you go looking for solutions on the internet, you find solutions in each one of those libraries and it's a pain to choose which one among them. Like each one would work for you, it's up to you which one to pick. And so as I'm more comfortable with Python, I try to see if I can find a solution in Python. And that's where Streamlit comes into play. So Streamlit allows you to turn your Python scripts into web applications. And so it essentially allows you to turn any make any script into interactive apps. And you don't need any front end experience for it. And it's really easy to deploy. So for example, this is an image taken from their showcase. So this is a model. So this is the visualization of a model used for objective detection. And so you can see like what are the different objects detected in this frame by the model. So like you can also see filter based on thresholds, like timestamps, etc. And all this is quite easy to achieve in Streamlit. And so Streamlit supports most of the major frameworks or libraries like scikit-learn or C-Born or Matplotlib, pandas, TensorFlow, even LATIC. And it's quite easy to get started with them. Their documentation is quite good. And you can find examples for using each one of these libraries. So coming to my use cases. So my first use case is going to be creating an interface for machine learning as showcased on their website. And the next use case will be like the classical data visualization in this case using travel data. How do you create an interface for machine learning? So my previous workflow would have been like creating a model using Jupyter Notebook or Python like editors. And then I would create a small wrapper which would just wrap this model into an API using something like Flask or Fast API. And then I would write a small frontend using HTML, JavaScript, and CSS using some simple frameworks like Bootstrap. I mean, there is nothing wrong with this approach. It works quite well. The only issue is that if this needs to be protectionized or made into a proper application, I need more time for the design. Like you need someone to help you design this page properly and also some of the standardizations. And on the other hand, with Streamlit, it works kind of similar with the model building. So you build your model. 
And then instead of building an API and all these HTML, CSS, JavaScript components, you add like Streamlit components for the UI. And under the hoods, all these Streamlit components are rendered as React components. But you don't need to worry about this at all. So I would like to talk to you about my use case here. So my use case is going to be the ImageNet. So I'm taking a pre-trained ImageNet classification model. And I'm adding some Streamlit code which would help me to basically create this interface that you see on the right. Okay, coming to the demo. So let me just refresh this app. So this is running completely on my local machine. So there is an option to select the model that you like or the network that you like. And it allows you to drag and drop files or you can also select a file. So let me just select a file. So yeah, I'm just taking like a green Python image from the internet. And so right now, this model is this image is being sent to this model using this network. And then it tells me that, okay, this is a green snake with the probability of 96.58%. And you also have like this other predictions from this network, like green Mamba with like 3%. And similarly, you can also see it for other networks like VGG19. Yeah, here you can see that it's a green snake with 97% confidence. And similarly, you can see it for any other network for the image net model. So how does it work under the hood? So anytime one of these things change, so either one of these components, one of these selections here or the image or the image, this whole script is run again and you see like the outputs. So here you can see that for rest net, it's a green snake with 99.64% accuracy. I would also like to showcase like the code behind it, because it's not that complex. So as you saw right now, since I toggle this show code, the script is now running again. Yeah. So in this script, all the streamlet components are prefixed with this ST. And most of the code here is from the standard image recognition for image net. Yeah. So, I mean, these components are used to basically set the layout of the page, like how you should load the sidebar and like these icons and page title. Yeah. So this select box is what you use to get like a select box. And what else? Yeah. You also have a file uploader, which you can use to upload files. And in this case, you can also restrict the file sizes and types. Yeah. Yeah. And you can read the uploaded file using just read function. And here I just pass this image to the network. Yeah. And to display an image like you see here, the only thing which you need to do is like put it as an ST image and pass it like the image data. You can also set captions. It also has an option to write upon this data frame to the output by using a streamlet data frame component. Yeah. So this is this component. And yeah. So one thing which I would like to highlight here is this ST.cache. So it's actually a decorator. What this allows you to do is like improve the performance of your app. So with this ST.caching enabled, you would, this function is executed only if the inputs change or the function changes. So every time the output is output of this function is obtained, it's a hashed and stored internally in the memory. And whenever this thing changes, only then if it doesn't change, this function is not executed again. Yeah. 
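The UI code shown in the talk isn't reproduced here, but a minimal sketch along the lines described — network selector, file uploader, image preview, prediction table, and st.cache around the expensive model load — might look like this. I'm assuming Keras' pre-trained ImageNet models purely for illustration; the talk doesn't say which deep-learning library is used:

```python
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.applications import resnet50, vgg19   # assumed model library

st.title("ImageNet classifier")
network = st.sidebar.selectbox("Network", ["ResNet50", "VGG19"])

@st.cache(allow_output_mutation=True)       # only re-runs when 'name' changes
def load_model(name):
    return resnet50.ResNet50() if name == "ResNet50" else vgg19.VGG19()

uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded image")

    module = resnet50 if network == "ResNet50" else vgg19
    x = module.preprocess_input(np.expand_dims(np.array(image, dtype="float32"), 0))
    preds = module.decode_predictions(load_model(network).predict(x), top=5)[0]
    st.dataframe({"label": [name for _, name, _ in preds],
                  "probability": [float(p) for _, _, p in preds]})
```

Every widget interaction re-runs this script from top to bottom, which is exactly the behaviour described in the demo; the cached model load keeps those re-runs cheap.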
And with that, what I really like about this is, so if I want to just get the basic recognition along with the confidence, you just need like around three lines of streamlet code. And this is quite similar to writing a command line parser, except that with this, with using streamlet, you get a much better interface. You can actually showcase this to people who are not technical or who are not that comfortable using command line interfaces. So my next use case is going to be some visualization. So there are lots of different ways in which you can visualize data. You can use Jupyter notebooks or you can use presentations. You can use, you can share scripts. You can use your own custom code to do your analysis. You can even create reports in tools like Tableau or Power BI. But what about using web apps? So that's what I'll explore in my use case. So previously I used to use Jupyter notebooks for my visualization. It works really well. I mean, you can write your own custom graphs and visualizations. You can also tell stories using these Jupyter notebooks. I mean, there are some problems with Jupyter notebooks though. Like for example, whenever you try to share this Jupyter notebook with someone, you face issues with either dependencies or the order in which these cells need to be run or some state, internal state, which is not properly documented. And if you like to hear a bit more about this, there's a really good talk from Joel Groos at JupyterCon, I think two years back, about why he doesn't like Jupyter notebooks. With that, I come to my use case. So as I mentioned before, I really like to travel. And I really like this feature in Google Maps, which allows you to visualize where you are at certain points of time. So all this is done using your location history stored on Google. And this is really cool. So I thought, is it possible for me to create it myself using Streamlight? And there are also third-party location visualizers available on the internet. So for example, this one allows you to create a heat map of all the locations that you have been to. And Google also allows you to get this data out of Google systems using the takeout service so you can get your location history and use them. So what I want to do here is I want to try to recreate this view if possible. One thing which I realized quite early on was that I cannot create like as fine-grained visualization as Google has, because with the raw data, you only get latitude and longitude data along with the timestamps. What I mean is that the places are not really marked. It's possible to get this data if you like using some third-party reverse GPS coding applications or services. However, it's quite expensive if you want to do it for like a huge data set like what I have here. So instead, I try to add some background information by trying to get some images from Flickr, which public images from Flickr based on the GPS coordinates. Flickr has an API which allows you to get images around a GPS coordinate. Also I want to do some classic analysis tools like histograms and heat maps. And so this is how it looks like. Yeah, but I'll just go directly into the demo. Yeah, so let me just refresh the app. So this is also running completely on my local machine. So as you can see, I have around 800K data points between 2013 and 2020. And it gives you like a map out of the box. You can zoom in and zoom out into the map, like see where you were, at what point of time. And you can also filter based on the time frame. 
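The talk doesn't show how the Takeout export is turned into a dataframe; a plausible pre-processing step, assuming the older Location History JSON layout with latitudeE7/longitudeE7/timestampMs fields (the export format has changed over the years, so the field names here are an assumption), might look like this:

```python
import json
import pandas as pd
import streamlit as st

@st.cache
def load_history(path="Location History.json"):
    """Flatten a Google Takeout location-history export into a dataframe."""
    with open(path) as f:
        records = json.load(f)["locations"]
    return pd.DataFrame({
        "latitude":  [r["latitudeE7"] / 1e7 for r in records],
        "longitude": [r["longitudeE7"] / 1e7 for r in records],
        "timestamp": pd.to_datetime([int(r["timestampMs"]) for r in records], unit="ms"),
    })
```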
And it automatically refreshes the app with the right points. And it also does the centering and zooming. Yeah. I mean, you can go like really deep into it, like see where you were, at what point. And this map is actually provided out of the box from Streamlit. The only thing which I did was I used like a map box components or tiles for this map for better performance, but you don't need to do it. By default it has open source tiles. And yeah, so let me also just try to plot like the histograms of how many places or where I was at what point of time. Yeah. So since this change, the whole script is now rerun. Yeah. So here you can see like the time frames in which I was traveling. Like for example, you can see that 2016 was quite a lot of travel and whereas 2020 is not that much because of the pandemic. Yeah. And you can see that I have hardly any data from 2013 and 2014. And you can also split this by months. And you can see like, for example, I was more active during the summer months. Yeah. And you can also see like what hours you have more activity. Like for example, evening is more active and things like that. So all these plots are actually a matplotlib plots. And I'm just using a Streamlit to render them or make them interactive. So if I change something, all these plots would also change. Yeah. Here you can see that in 20, so now since I have only 2017 to 2020, the data is like lower. Similarly, I also have an option to load images from these locations. So what this does is based on the locations that it finds between this time frame, it randomly finds images from Flickr and renders them. Yeah. So there is not a lot of logic in this. So images could be repeated. Yeah. Lots of things could go wrong here. Yeah. You could also increase or decrease the number of images that you want to show. And this would be loaded from Flickr. Yeah. Here you see that the images were repeated. Yeah. So another thing which I would like to show is like the heat map, which tells me like what's the frequency with which I visit some of these places. So this takes a bit of time as it's a bit resource intensive. For this heat map, I use a folium, an open source library, which allows me to create like a heat map visualization. However, this component or this library is not directly supported by Streamlit. So I use like a third party integration or community plugin, which allows me to use or like visualize a folium heat maps or folium maps. As you can see, like since I'm based in Munich, there's a high concentration around there. But yeah, you get the idea that, okay, so these are some of the places that I have been to with more frequency than others, for example. And so now the images again refresh because I changed something on the, because I asked it to load the heat map. Yeah. And so I want to disable these and just give you like a short glance at the code. This is a bit more involved than the previous code, but still like there are some interesting components to it. Like for example, this is the function that gets the images from Flickr. Yeah, what do I want to show? Yeah, so this is basically the selection and selection of the data like the timeline. And here is where I do all this filtering of data between these time slots. And if you just want to showcase the data on a map, the only thing which you need to do is like streamlit.map and provided a data frame with like latitude and longitude as some of the columns. It would plot these based on these latitude, longitude pairs. 
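Building on the load_history sketch above, the time-frame filter, built-in map, matplotlib histogram, folium heat map and Flickr images described in this demo might be wired up roughly like this. folium_static comes from the streamlit-folium community package mentioned in the talk; get_flickr_urls stands in for the Flickr API call and is a hypothetical helper:

```python
import folium
import matplotlib.pyplot as plt
import streamlit as st
from folium.plugins import HeatMap
from streamlit_folium import folium_static   # community component for folium maps

df = load_history()
start_year, end_year = st.slider("Time frame", 2013, 2020, (2013, 2020))
selected = df[df["timestamp"].dt.year.between(start_year, end_year)]

st.map(selected[["latitude", "longitude"]])  # built-in map: just needs lat/lon columns

if st.checkbox("Show histograms"):
    fig, ax = plt.subplots()
    selected["timestamp"].dt.year.value_counts().sort_index().plot.bar(ax=ax)
    ax.set_xlabel("year")
    ax.set_ylabel("data points")
    st.pyplot(fig)

if st.checkbox("Show heatmap"):
    m = folium.Map(location=[48.14, 11.58], zoom_start=4)   # arbitrary start view
    HeatMap(selected[["latitude", "longitude"]].values.tolist()).add_to(m)
    folium_static(m)

n_images = st.slider("Number of images", 1, 12, 4)
st.image(get_flickr_urls(selected, n_images), width=200)    # hypothetical Flickr helper
```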
And yeah, so as I mentioned, so you can use piplot to plot like matplotlibplots. Like yeah, the heat map, so as I mentioned earlier, I use FOLIUM and I use, so I also use this FOLIUM static function, which is coming from a community component. Yeah. And this to showcase an image, you just need to pass it a list of URLs and you can specify like the how big you want the images to be loaded. Yeah. And yeah, so as we saw earlier, these are some of the things I showed in the demo. So you might be wondering, are there alternatives to doing this in Python? Yes, there are. For example, Vola is a library which tries to turn your Jupyter notebooks into web apps. It's pretty good. You can also have the classic options like bokeh or plotly. To be honest, I haven't used any of them too much. I've just checked Vola. I mean, if you are already comfortable using one of these frameworks, I would say stick with it unless you find something really useful in Streamlet. Just some summary of my observations. So what I really like about Streamlet is you can make any of your visualizations interactive. You can customize it however you like with Python code. The deployment is really easy. So you can actually run the script. So it's just Streamlet run Python file and on a server and expose the ports outside. Or you can also use Docker to basically deploy this. And the community is really active. The forum is really active. You can get responses pretty fast. And if I talk about some of the pain points with Streamlet, it's like it doesn't really have a built-in authentication mechanism. Although there are workarounds, so it's coming as part of their premium feature. But there are workarounds around it. So you could use like a CDN or a reverse proxy in front of your web app. Or there is also like an input field which you can use to authenticate the application. So there are workarounds around it. So it is possible. And also I didn't really have some luck trying to deploy this machine learning interface onto their Streamlet sharing platform. So this allows you to run your Streamlet apps on their server with a shareable link and all. It's quite promising, but it did work for me. But I think it's mostly down to the dependencies on the server. To conclude, I would say it's a great thing to have in your toolset for a data scientist or a data engineer or people who are not that familiar with JavaScript. It's great to showcase solutions. Like you can build something really fast in a couple of hours and then share it with the team and they get like instant feedback, especially people who might not be that technical. Yeah, and with that, I'm coming to the end of my talk. So there are some references so you can find these code examples on my GitHub repositories. And also there are some examples, applications on their gallery. And yeah, you can also find, you can also reach out to me if you have some questions. I can try to help out. Yeah, thanks. And now I'm open for questions. I think we're nearing the end. So I'm just going to... Okay. Thanks a lot for your talk. Now we're going to get a few questions. So the main question was, of course, comparing with how this is compared to Dash, Plotly, for example. Yeah. So yeah, I mean, first of all, I haven't used Dash or Plotly that much. So I've just tried a few experiments with it. So my understanding is that Dash or Plotly is more aimed at the enterprise for now. 
So you have much more control over how you want to basically arrange the components or have different themes for the components. So Streamlit is still quite early. So it still needs to work on this or they are working on this as far as I know, because they are also having a building an enterprise product which would allow you to have a bit more options. And also with Streamlit you can connect to Plotly. Okay. And are you tied to specific rendering library for your plots or can you use something like high charts or...? So I don't know explicitly about high charts. But I mean, so if some charts are not supported by Streamlit out of the box, you can actually build your own components for it. So they also have some documentation on how you can build custom components to render your charts. So it's basically some react code. Okay. Okay, thank you. And I think you mentioned in your slides, I was just wondering what were the options for limiting access to the application? Yeah. So one thing is, so the proper authentication is being built into their enterprise product. But for now there are workarounds, so you could actually have a password field. So and then do a matching in your code. So if the password matches, you can allow access. Or the other option would be to use some load balancers like Cloudflare or so and then authenticate it using that. Okay. Thank you. And do you think that Streamlit can be used for controlling experiments with GUIs like LoveView? LoveView? So I'm not actually quite familiar with it. Or do you mean the Vue.js? No, I think it's for controlling experiments or...? Okay. Okay, I see it now. I'm not completely sure about it. But yeah, I'm not sure about this lab view. Yeah, sorry. And do you think it would allow for more rich formatting of the content like image coordinates and content, etc.? So Streamlit by default doesn't do much of these like plots. So the plots you have to build your own using any of your frameworks. So and it supports all a lot of these frameworks by default like quite a bit. So if it is, and what Streamlit does is it allows you to make them interactive. That's what Streamlit basically does. So you need to build your own plots using your tools. All right. Thanks. Any further questions from the chat? Oh, how can you connect, for example, D3.js? But I believe it's the same. Like you build your own components, I presume. Yes. So you build your own components and then there's a documentation on Streamlit's website about how you can use D3 components in Streamlit. Yeah. Okay. Thanks. And how complicated is that to build your own components? Is it like you need to do your own JavaScripts or it's purely Python? Yeah. So you need to write your own JavaScript to build the components. Yeah. Okay. Thanks. I think we are close to the end. Any last question? In any case, if people are interested in keeping the conversation, the link to the hallway session is going to be posted right after the talk is over.
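The "input field" workaround mentioned in the Q&A — gating the app behind a password typed into a Streamlit widget — can be as small as the following sketch. Where the password actually comes from is up to you; reading it from an environment variable is just one option:

```python
import os
import streamlit as st

PASSWORD = os.environ.get("APP_PASSWORD", "letmein")   # injected at deploy time, ideally

entered = st.sidebar.text_input("Password", type="password")
if entered != PASSWORD:
    st.warning("Enter the password to use this app.")
    st.stop()   # nothing below this line runs until the password matches

st.write("Authenticated - the rest of the app renders from here on.")
```

This is not real authentication (anyone with the password shares one account), which is why the talk also mentions putting a reverse proxy or CDN-level access control in front of the app.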
Have you always wanted a flexible & interactive visualization that is easy for others to work with, without handling all the Javascript libraries? Or do you want to build a user interface for your Machine Learning model? This talk has you covered with building data apps in Python using Streamlit, with examples of a Travel Visualization App using Google Maps data & a UI for the ImageNet model. In this talk, I showcase a couple of different use cases where you can build your data-focused applications using Streamlit, an open source library in pure Python. In the first use case, I cover how you can build interactive dashboards using different Streamlit components. These dashboards can be easily deployed & the consumers can easily work with them without worrying about all the dependencies that need to be installed to run Jupyter notebooks. In this showcase, I will go over how you can build a dashboard of your historical travels using Google Maps Location History, including some memories from those places pulled from Flickr. In the second showcase, I will describe how users can create a quick interface for their machine learning model using Streamlit. These interfaces are much faster to develop than building a custom frontend for machine learning models with the help of Javascript libraries. In the demo, I show how I built a UI for the ImageNet model. Together, these showcases demonstrate how such data-based web apps can be built using Python functions & Streamlit components.
10.5446/53312 (DOI)
This presentation is about how new radio was ported to build route and how we applied the resulting application to develop an embedded digital network analyzer. This is a feedback on a graduate course intermediate between embedded electronics and radiofrequency digital signal processing software and radio. So the object we wish to analyze is dual surface acoustic wave resonator commercialized by the company Sensei or this is a temperature sensor with two resonators here. It's even though it's operating at 444 megahertz. It's a very compact device despite the 70 centimeter wavelength. Thanks to the slow acoustic velocity on positive materials. These are optical mapping where we see energy in one resonance or in the other resonance depending on which mode we're looking at. These are characterizations using a 20 kW rodentschwars network analyzer and this is what we aim at achieving using our Raspberry Pi and RTLSDR DVBT with the appropriate radiofrequency source. So the outline of the presentation will be the built-in framework, so a consistent environment for generating a Linux kernel, the libraries, a user space application and the tool chain. We'll see how new radio is running on the target system. We'll take the example of a Raspberry Pi because we needed to share one hardware platform with each student unable to attend university courses at the moment. So cost was an issue. We'll demonstrate first how FM broadcast can be received and how FM broadcast can be generated and finally how we can combine. So this is the workflow first building the built route environment on the PC which will be called the host, generating an SD card image that will be running the operating system on the platform, embedded platform which is called the target. The file system with new radio is running on the Raspberry Pi 4 and the flowchart will be generated using new radio companion on the host PC which will be transferred, SCP, NFS, whatever you wish, on the RPIs for execution. So here's an example, for example, of a setup running GNSS-SDR, the free open source GPS receiver, either fetching data from an RTL-SDR dongle or for B210. So you'll see that both LibuHD and Osmo-com are running on the Raspberry Pi. So all these hardware are supported with the benefit of the Raspberry Pi 4 with its USB 3 port allowing for streaming very fast data. In this practical example we'll show how we use the Raspberry Pi with RTL-SDR DVBT connected to the device under test and since we need to probe the device with some sort of energy source we'll try various energy sources. This is a broadband noise generator, we might as well use a pulse generator and we'll see at the end of the day that again for cost reasons we'll end up using Pi FM using one of the GPIO here streaming a radio frequency signal from the internal fractional PLL. Because we want to develop an embedded instrument this whole acquisition system will be controlled from a host PC and we'll be using a 0MQ for streaming the results of the measurement from the embedded target to the host PC. So first of all why focus on build route, why not just use one of the ready-made operating systems for the Raspberry Pi. 
First of all because we wish to teach embedded system electronics development and we're just running a general pulse operating system that not meets the requirement of embedded systems of optimizing resources, optimizing space, optimizing energy, optimizing speed and furthermore you can see here the performance gain on running a dedicated build route toolchain with respect to a general pulse pulse Rasband running on all Raspberry Pi and not benefiting from the exceptional performance of the Raspberry Pi 4. So if you look at the Valk this is Valk config results and Valk is the vectorized kernel so the single instruction multiple data instruction set and using this mandatory as was shown by Semucray and his computers running parallel instructions allows you to greatly increase computation speed and is very well matched to linear algebra where you manipulate the matrix and vectors and that's exactly what single instruction multiple data is you feel a vector and when you want to add two vectors you do the same operation on each element of a vector so single instruction multiple data all elements of a vector are operated the same way. So you see here at that Rasband with an on-demand CPU speed has no Valk optimization so they are not using the Neon SIMD instruction set and the speed is quite poor you see that with respect to some of our implementation using build route we have significant speed gain in most of the implementations here. For example you see that the generic implementation here takes 420 milliseconds here we have 254 milliseconds for this function here with 1.2 seconds with the Neon function so the benefit of having here we have 457 milliseconds with respect to 144 so the benefit is significant by having a dedicated toolchain that will use at best the functions of your CPU rather than a general purpose operating system. The Ubuntu guys did a bit of a better job because they have a much more recent Valk library that we need to upgrade on build route and actually this new Valk implementation even adds Neon support for this function here that was still a generic function here so there is improvement with build route over Rasband but if you use this Ubuntu system you get good performance and yet you have a huge system that definitely does not meet the requirement of embedded system if you consider embedded as PlutoSDR or these low footprint embedded systems. So optimizing resources of these low power systems requires minimizing the amount of memory to the application, minimizing the amount of storage to the binary that you wish to execute and this means never compile on the target always cross compile on this extremely powerful host computer your PC and transfer the cross compiled software to the target in our case it will be an ARM board and whatever you've learned here will be applicable to MIPS, Spark, RISC-5 whatever embedded platform you're running on. So the objective of build route is to provide a consistent set of Linux kernel, libraries, user space application, bootloader and cross combination tool chain and without this consistency you will face a lot of trouble with inconsistent library for respect to user space binaries and kernel. 
Now other frameworks will provide such a functionality you have to embed open embedded are such frameworks the main difference between build route and these frameworks is that build route will create a dedicated image for one particular application and will not compile anything else as opposed to Yopto open embedded which will compile a whole distribution and then you select which package you wish to install on your embedded system. So actually for a course with students it's questionable whether build route is optimum because here you would compile a whole distribution and students will select which application they wish to install but in the context of learning how to develop embedded systems we believe build route is better suited. From a hard disk storage perspective Yopto open embedded is typically 60 gigabytes, build route will demonstrate here with 8 gigabytes. So to get started with build route you just download the latest release and make sure that once you've configured your build route you don't rename the directory. Some of the directory names are hard coded in the make files and if you change the name of your directory you break the whole installation. So once you've installed build route you select in configs you've got all the supported boards including the Raspberry Pi 64 bit kernel so you make whatever board depth config you have you run make so that you build the whole image for this board. So this is where the very lengthy compilation step where all the archives are downloaded and compiled including the compiler possibly C++ Fortran compiler because you want your compilation environment to be independent on the host so you want build route to be fully self-contained so all these will be compiled for your host and your targets so the PC and the embedded board and at the end you get a single file which is the SD card image that you will dump to a mass storage medium in the case of Raspberry Pi it will be a SD card. Make sure that you check what is the name of your SD card and don't erase your hard disk when you do this step this will be irreversible so I always make sure with tail on DMSG or LSBLK LS blocks that I identify which is my mass storage medium that I will put in Raspberry Pi for executing the build route binary. 
Now we wish to control and transfer data to the between the Raspberry Pi and the host so we need some sort of ethernet interface so the easy way is to just modify the network interfaces on your SD card second partition you just change the network interface configuration and the interesting thing is well actually most of the time the system will not boot at the first trial if you're not familiar with the configuration so having a USB serial cable that will output a serial port is always easier if you want to debug and the interesting thing with running this lab tutorial with many students is you realize that now laptops no longer have ethernet ports you no longer have ethernet interfaces so you need to find a solution to make this system talk to your laptop with no ethernet interface so the USB3 the USB-C connector here also supports on the go configuration where you emulate ethernet over USB this functionality is not by default active in the build routes setup so to activate ethernet gadget over USB you must add in the first partition configuration the device tree overlay DWC2 that will activate the DTBO file that will activate the functionality of this on the go overlay for ethernet and you must probe the associated kernel functionality so probing DWC2 kernel module and activating the gadget ethernet functionality of your Linux function also if you want to SSH to your Raspberry Pi you need to add a SSH server on the Raspberry Pi drop there will do this for you and to add new functionalities in the build route directory you just run make many config that will give you the build route configuration menu and you select it's like in the VI you slash for select for searching the drop there in the package sections and again you make to recreate your SD card image and UDD once you've added your new functionality so most build route development is about make many config make and transferring the SD card image to your embedded board or flashing wherever firmware you've generated so build route is a bit of a challenge for, radio is a bit of a challenge for build route because you need quite a few exotic configurations with respect to the default configuration you need the GLPC C library you need to have EUdev to dynamically create devices when you insert your DVPT dongle we need Python free support which is not default supported by build route and all the other new radio functionalities that you wish to have Python support 0 and Q all this kind of stuff now if I wanted to go step by step I would like to add each one of these functions one after the other and we met a lot of difficulties by doing this because build routes cannot handle dependency changes if you take for example you compile new radio and then you say oh but I would have wanted to have Python support and you add Python support new rate the build routes will not realize that it has to recompile the whole new radio with the new Python support so every time you make a very such a large dependency change the safe way of doing things is to make clean before you make again and make clean will make you recreate the whole build route tree structure again so this is related to the fact that build routes is relying on K config for its configuration as do not attacks or ZIFI and K config is not able to detect these dependency changes so what we did is we provide on this website on this GitHub Raspberry Pi 64 new radio Dev config that you can store in your configs directory and this will from the beginning give you all the dependencies to have 
Python support 0 and Q and all these functionalities to get a running new radio on the Raspberry Pi 4 there is also a Raspberry Pi 3 if you wish to have a Dev config for these devices so you put your new configuration you make clean you make the new configuration and you make and you wait for a long time actually not such a long time because the download directory still has all the previous archive and now with new radio your hard disk storage requirements about 12GB and after you've compiled all the stuff you put your new SD card and with its SD card.img file that has been DD and if you go on the Raspberry Pi and you run Python 3 import radio you must be given a prompt with no warning no error meaning that Python support for new radio is working now this is all very nice but we would like to have some application for new radio so the first thing that we always do with new radio is to listen to the FM broadcast station and listening means having a sound card output so default build root configuration is to not activate the sound card if you want to activate the sound card as we discussed earlier for the USB on the go you have the config.txt in the first partition of your SD card you activate the device tree parameter of audio being on and if you look at the boot messages of your Linux system you will see that the sound card has been activated you can check this using the ASA tools with speaker test that will output a tone on the jack 2.5mm audio jack. Now this is just to check that the sound card is working and what we want to do is to run the first flowchart from new radio companion now in this approach of deeply embedded systems we do not wish to have a graphical user interface on the embedded system so what we're going to do is create the flowchart on our host PC this will of course be a no graphical user interface flowchart since there is no graphical user interface on the Raspberry Pi and once we created the Python script we will transfer the Python script from the host to the target and execute on the target so to do this we'll need to have new radio 3.8 on the host on the PC because Debian stable is still shipping the new radio 3.7 and because pyphomes is working so well I would recommend compiling from scratch a new radio 3.8 using pybons it's always instructive and useful to have a clean new radio environment and once you've created your Python script you just transfer to the Raspberry Pi be careful that your audio sync must be stereo the Raspberry Pi will not accept mono input so here for example to test everything we have a signal source with our 440 hertz signal source 48 kilohertz audio sync and we have a stereo no GUI flowchart and we run this on the Raspberry Pi now we solved the demonstration that we could get the Raspberry Pi working so what do we want to do now we want to get the RTLSDR dongle working and we want to check that we can control the setup of new radio from the PC host using a Python script so the sound card now that we check that the sound card is working we can just replace the signal source with an osmo comm source with the dvbt here and same story we select a single FM channel by having a low pass filter and we demodulate the wideband FM the output audio will allow you to listen to audio station but in our embedded network analyzer we don't want to listen to the output of a network analyzer we want to stream it to the PC so we replace the audio sync again with a no graphical user interface on the target on the Raspberry Pi with a zero MQ sync and the zero 
MQ sync is referred to with a TCP address of Raspberry Pi any port above 1024 on the host PC we have a zero MQ source which feeds the audio sync of course you have to be consistent in the data rate again you put here the Raspberry Pi IP address and this will use your laptop PC as a sound card the fancy sound card but in this case we're just streaming the audio so all the fancy processing is done of the Raspberry Pi the complex processing of FM demodulation is done of the Raspberry Pi and you have limited bandwidth communication over Ethernet just to stream the 48 kilohertz audio whereas here the IQ data stream was 48,000 times 24 to have enough bandwidth for the demodulation so this means we can stream data from the Raspberry Pi to the PC now we would like to control the Raspberry Pi from the PC so the other way around sending commands from the PC to the Raspberry Pi and to do this we'll use the Python script that can be included in the Python that was generated by the radio companion that's the functionality of the Python snippet and this Python snippet will allow us to initialize a thread that is included in a Python module and this thread will be running a TCP server so a thread is just a piece of software that is running in parallel so we have the main thread running the new radio scheduler and the parallel thread that is running the TCP server and the TCP server will be receiving commands from the client on the laptop on the PC and these commands will call the callback function and these callback functions will allow us to change the parameters on the new radio flow graph so this is with a server that we're going to run next to the new radio scheduler on the Raspberry Pi connect and we accept the we listen to the connections and when the PC connects we will receive commands that allow us to call the callback function and change the parameters of the new radio flow graph so this allows us to tune the radio frequency from 10 local oscillator as commands that were transferred from TCP IP client to server and to stream the 0MQ stream from the Raspberry Pi to the PC so this is the practical demonstration with in this case we have the demonstration of how you include your Python server as a Python module that is initialized using the Python snippet and you can check that actually when you run in this case I'm just running a telnet from the laptop to the Raspberry Pi and this telnet allows me to change the parameters indeed the callback function that tune the settings of my new radio flow graph are called and we see indeed that the frequency of the carrier that is sent here is changing with in response to the commands I sent from the laptop to the Raspberry Pi so we solve the problem of collecting data feeding the Raspberry Pi sending the data to the PC and controlling the PC from the PC the Raspberry Pi but now we still need to prove the device under test and to have a radio frequency source now I started this presentation by saying we would be using a noise source but I had the difficulty assembling as many noise sources as there were students in this course so I needed to find a way of generating a signal easily so I wanted to make a pulse generator and if I wanted to make a pulse generator I needed a way of controlling this of these pulses and I would be using the PWM of this modulation of the Raspberry Pi so this is what I did here I used one of these fast comparator from my analog device it's an ADCMP 5773 that converts the slow rising edge of the GPIO in a very sharp rising edge less 
than one nanosecond so basically one gigahertz bandwidth however again I had the difficulties assembling enough boards for all the students and when I was trying to trigger this fast comparator I realized that I needed the PWM the pulse with modulation of the Raspberry Pi and actually the Raspberry Pi PWM is used for Pi FM for generating an FM signal from the Raspberry Pi so this is the output of the noise source this is the output of a pulse generator and the problem is that if your pulses are too far apart then you create a comb for a transform of a comb in the time domain of a comb in the frequency domain with spacing inverse of the time spacing of your pulses and so if your pulses are too far apart you get a comb with all these unwanted features in your spectrum that will prevent you from detecting the resonances of the acoustic device so let's use the Pi FM project Pi FM is using the fractional PLL it's a 12-bit integer 12-bit fractional part that is on the Raspberry Pi and by reprogramming periodically using the direct memory access function of the PWM you can update often enough the frequency of your fractional PLL to have an FM output and you can demonstrate that the frequency resolution at an output that's about 90 megahertz is about 2.7 kilohertz that's for a fractional PLL of 12-bit and we want to probe a device at 44 megahertz but you are told in the literature that the output of Pi FM is below 150 megahertz more or less so what we're going to do is use the fifth overtone of the Pi FM output to probe the SO device so there are many implementations of Pi FM I will not go through all of these the one I found easiest to read is this implementation here which you can compile simply by replacing in his make file a cc of your host computer with armlinux gcc from your build route compiler the one that we're going to use is the archive librp itx from Everest from this GitHub library here and because these are CMake you can call the CMake configuration for cross-compiling CMake tools by using the CMake rules that are provided by build route so when you do this you see that you emit a signal in the FM band and be sure that you never emit this over the air because you've got all these ugly overtones as usual with SDR you've got the main emission that you've got all the overtone third overtone fifth overtone and what you see here actually is that fifth overtone is quite powerful and if we slightly tune the FM emission we see that we're changing the emission at 444 so this means that Pi FM can be adapted to probe our surface acoustic wave device so the first thing you can try is to feed the output of your awesome SDR source into the 0mq sync after FM demodulation and feed this as we've seen earlier and this allows you to listen to Pi FM using your Raspberry Pi as a receiver so what you see here is the Raspberry Pi the DVBT of course we never want to emit Pi FM over the air so we are starting new radio this is a screen you see that new radio is tuned to 87 megahertz here we have a set of attenuators that feeds the the DVBT dongle and now we start running Pi FM and we hear the message from Pi FM RDS that is received by the Raspberry Pi this is the spectrum that has been received and because we're also emitting on the fifth overtone we'll be able to listen to the same signal by tuning the DVBT dongle not to the higher frequency of 87 megahertz but to the fifth overtone so here we change the carrier frequency multiplied by five and again we can listen to the FM broadcast on the fifth overtone 
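A headless Pi-side receiver of the kind used in this demo might be sketched as follows. A GRC-generated flowgraph looks similar; the sample rate, filter cutoff and ZeroMQ port here are illustrative choices, not the course values:

```python
from gnuradio import analog, filter, gr, zeromq
from gnuradio.filter import firdes
import osmosdr

class pifm_rx(gr.top_block):
    def __init__(self, pifm_carrier=87e6, samp_rate=1.152e6):
        gr.top_block.__init__(self, "pifm_rx")
        src = osmosdr.source(args="rtl=0")           # RTL-SDR DVB-T dongle
        src.set_sample_rate(samp_rate)
        src.set_center_freq(5 * pifm_carrier)        # listen on the fifth overtone
        decim = 6                                    # 1.152 Msps / 6 = 192 kHz quadrature rate
        lpf = filter.fir_filter_ccf(decim, firdes.low_pass(1, samp_rate, 100e3, 25e3))
        demod = analog.wfm_rcv(quad_rate=samp_rate / decim, audio_decimation=4)  # 48 kHz out
        sink = zeromq.push_sink(gr.sizeof_float, 1, "tcp://0.0.0.0:5555", 100, False, -1)
        self.connect(src, lpf, demod, sink)

if __name__ == "__main__":
    pifm_rx().run()
```

On the host PC, a matching ZeroMQ pull source (float items at 48 kHz) feeding an audio sink plays the demodulated audio, as in the flowgraphs shown earlier.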
and again it works very well this is a demonstration of listening to Pi FM RDS using the Raspberry Pi now that we've done this we can modify Pi FM RDS this is left as an exercise or you can use the chirp function from Ivaris repository and you can sweep the frequency using the GPIO output again stream the data to the PC and you can display the spectrum and here you've got the two dips associated with the transfer function of our network analyzer so we were able to detect the two resonances by only using the Raspberry Pi as a radio frequency source and the RTLSDR DVBT dongle as a receiver so as a conclusion we try to show how new radio ported to build route allows us to do so many applications and actually this is applicable not only to the Raspberry Pi but to SDR the red pitaya all these ZINN based boards here is an example of the STM32MP1 board from ST Microelectronics with its screen and new radio running actually this one even has a mouse that allows you to probe the cursor value this means that software defined radio is well adapted to a single board computer execution we did not address in this presentation how you could use non-official packages we ported GNSSSDR as well to this non-official br2 external mechanism and if you want to read further about embedded Linux and build route these are a few references of course every year there are presentation at FOSDEM and for the French reading audience here is a reference about embedded Linux written by pfissue and others and with a few seconds I have left please allow me to make a quick advertisement for the next edition of the European New Radio Conference that will be hailed in Poitiers, France June 24th, 26th, 2021 first day Friday will be dedicated to all presentation and tutorials in Poitiers Saturday 26th will be a joint presentation with SDRA in Friedrichshafen in Germany so if you enjoy using new radio and want to share your experience with others feel free to consider joining thank you for your attention
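The control path described earlier in the talk — a Python Snippet that starts a TCP server thread next to the GNU Radio scheduler so the host can call the flowgraph's callbacks — could be sketched roughly like this. The module name, port and the set_freq callback are illustrative; they assume the flowgraph has a GRC variable named freq, and the actual course material may differ:

```python
# command_server.py -- illustrative sketch of the control thread
import socket
import threading

def _serve(tb, host="0.0.0.0", port=9001):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            for line in conn.makefile():
                cmd, _, value = line.strip().partition(" ")
                if cmd == "freq":
                    tb.set_freq(float(value))   # GRC-generated callback retunes the running flowgraph

def start(tb):
    # Called from a Python Snippet block ("main after init"): the server runs
    # in parallel with the GNU Radio scheduler on the Raspberry Pi.
    threading.Thread(target=_serve, args=(tb,), daemon=True).start()
```

From the laptop, `telnet <pi-address> 9001` followed by a line such as `freq 434000000` would then retune the receiver, matching the telnet demo shown in the talk.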
Embedded systems are tailored to a specific task aimed at minimizing resource and energy consumption (e.g. ADi PlutoSDR). Cross-compiling benefits from powerful personal computer computational resources and user-friendly interfaces while removing the burden on the embedded board of running the compiler. GNU Radio was ported to Buildroot to provide SDR enthusiasts access to the many boards supported by this cross-compilation framework. We demonstrate its use in a graduate course project aimed at developing an embedded network analyzer. A network analyzer for characterizing a radiofrequency device requires a radiofrequency receiver for collecting the signal that was generated to probe the response of a Device Under Test, and a matching signal source. We consider the RTL-SDR dongle as the receiver, while the Raspberry Pi processor Phase Locked Loop (PLL) has been shown to generate a radiofrequency signal in the FM band. In this demonstration, PiFM is used as a signal source. As students were not allowed to visit university during lockdown, a cost-effective solution had to be found to provide hardware to all students to complete the course at home: the solution of a Raspberry Pi4 and DVB-T dongle was selected to provide the framework of embedded radiofrequency system development. GNU Radio is cross-compiled using Buildroot to the Raspberry Pi 4, iterative tests allow for checking the functionality of each step, until a complete measurement is achieved.
10.5446/53316 (DOI)
Hello, my name is David Sorber and I'm going to be talking to you today about a project I'm working on to improve the GNU radio accelerator device data flow. As I'm sure many of you are aware, GNU radio is a software-defined radio framework that uses a block-based interface and includes a library of very nice signal processing blocks that can be interconnected in various ways to create signal processing photographs. The image you see there is an example flow graph that I pulled off the GNU radio website. While GNU radio is very nice, one thing that it does not explicitly support are accelerator devices. When I say accelerator devices, I'm talking about things like GPUs, FPGAs, DSPs, and really any other kind of hardware that you might use for signal processing computation. If you're like me, you may have worked around this limitation by creating your own signal processing, accelerated signal processing block using the block interface. If you did that, you probably noticed the same thing I did, which is that the efficiency of the data transfer into and out of the accelerator device is really rather suboptimal. When I say suboptimal, here's specifically what I'm talking about. Many if not most accelerator devices require a dedicated memory buffer to transfer data into and out of the device. These buffers are frequently known as DMA buffers. GNU radio manages its own buffers, so in order to use a device's DMA buffers, you have to copy from the GNU radio buffer into the device's input buffer, for example, and then do the opposite on the output path, copy data from the device's output DMA buffer back into the GNU radio buffer. This is an example of what's known as the dreaded double copy problem. A better way to interface with accelerated devices would be to remove the double copy and allow for the GNU radio framework to directly manipulate device DMA buffers, and that's exactly the purpose of the project that I'm working on. To start, I'd like to talk a little bit more about some background so that you can better understand the details of this project. As I mentioned before, and I'm sure most of you are aware, GNU radio blocks are interconnected to create flow graphs. Under the hood, when blocks are interconnected, the block that's writing output allocates a buffer, and of course, writes to that buffer, and the downstream block that's reading or consuming data from that buffer does so using a buffer reader object that's indicated there in blue. The interconnection between two blocks is represented by a buffer that controls the data moving from one block to the other block. And of course, one upstream block can feed multiple downstream blocks. Here I've shown two in this diagram, but depending on the blocks involved, it can be many more than two. And again, the buffering scheme is fairly similar. The block that's producing output allocates a buffer and writes to it, and the blocks that are consuming the output each have individual read pointers that keep track of their location within that buffer. Another interesting fact about the GNU radio buffers is that they are circular. And I'm talking specifically about the VM CircBuff class, of which there are several implementations that all work in the same general way. And they are very clever circular buffers in that they use a double mapping scheme. So if you see here, the dotted line across the screen represents the boundary between physical and virtual memory. 
In physical memory, there's a single buffer that's allocated, and in virtual memory, two mappings are created. We'll call them mapping zero and mapping one. And those mappings are placed back to back. What's returned to the user, however, is only the first mapping. So only mapping zero is accessible to the user. However, because mapping one exists, it allows data to be written and automatically wrap around the boundary of the buffer. So for example, if you're starting out at the end of mapping zero and writing across into mapping one, because mapping one is directly behind mapping zero, that has the effect of writing from the end of the buffer directly back to the beginning of the buffer. As I said, this is a very clever and elegant way of handling a doubly mapped buffer. And all of the existing buffers in GNU radio work in this way. So now I'm going to talk about the goals and the overall high level plan for this development project that I'm working on. So one of the most important goals is to preserve compatibility with the existing set of GNU radio blocks. And this is, of course, very important because there's many of them, both that come with GNU radio and as part of various out of tree modules. That's certainly the most important goal here is to preserve compatibility. The next goal is sort of a design goal, which is to create a hardware agnostic interface to allow users to define and use custom buffers. And those custom buffers would typically be things like DMA buffers that I was talking about earlier. Another key point here is that the goal is very much to make this hardware agnostic, not to create a specific interface for any specific type of hardware, but to create a more general one that can be used with lots of different kinds of hardware. And of course, another goal here is to not work in isolation, but to collaborate with the GNU radio development team, which I have been doing. And of course, with the ultimate goal of upstreaming these changes back into GNU radio. In terms of my high level development plan, I've sort of broken it up into two milestones just to make things more manageable. So the first milestone that I'm just calling milestone one is to implement that custom buffer interface that eliminates the double copy problem. And then milestone two is a more forward thinking thing to extend that same interface to support device to device transfers. And I'll talk a little bit about milestone two later on. So milestone one is what I'm currently working on. I started last year and in early December, I posted what I'm calling my first draft of the implementation to the NG sketch GitHub repository at the URL on your screen. I am currently working on the next draft as of the time that this video is created. Those changes are not yet posted in the GitHub repository, but I hope that by the time you're seeing this or shortly thereafter, they will be available. And next, I'm going to talk a little bit more in detail about what milestone one involves, which is first to refactor the existing buffer interface and create the abstraction for single map buffers, then to create an interface for blocks to allocate custom buffers. And then of course, to test the single map buffers, test the performance because that's of course the whole point of this project. And then furthermore to test with many different kinds of hardware to make sure that it's compatible with many different types of hardware and their associated runtimes. 
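Concretely, the double copy that this project sets out to remove shows up in today's hand-written accelerated blocks roughly like this (Python block API used for brevity; `dev` stands in for whatever accelerator runtime is involved, and its alloc_buffer/run_kernel methods are hypothetical):

```python
import numpy as np
from gnuradio import gr

class accel_passthrough(gr.sync_block):
    """Illustration of the double-copy pattern described in this talk."""
    def __init__(self, dev, nitems=4096):
        gr.sync_block.__init__(self, name="accel_passthrough",
                               in_sig=[np.complex64], out_sig=[np.complex64])
        self.dev = dev
        self.in_dma = dev.alloc_buffer(nitems)    # hypothetical device DMA buffers
        self.out_dma = dev.alloc_buffer(nitems)

    def work(self, input_items, output_items):
        n = min(len(input_items[0]), len(output_items[0]), len(self.in_dma))
        # copy 1: GNU Radio buffer -> device input DMA buffer
        self.in_dma[:n] = input_items[0][:n]
        self.dev.run_kernel(self.in_dma, self.out_dma, n)
        # copy 2: device output DMA buffer -> GNU Radio buffer
        output_items[0][:n] = self.out_dma[:n]
        return n
```

With the custom-buffer interface in place, the framework would hand the block the DMA buffers directly and both explicit copies disappear.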
So to start off with NG New Radio, there is a buffer class that has the interface presented in the box on the right there. And I started by making that buffer class an abstract class and then breaking it into two pieces. So the two pieces are the buffer double mapped, which represents what the buffer class was originally as an interface to the VM Circular Buffer that I mentioned earlier. And the newly created buffer single mapped, which is the new abstraction for single map buffers. And the idea here is that the single map buffers will wrap a custom buffer. One nice thing to note is that the single map buffer implementation can be tested just with regular character array buffers. They don't actually have to be any kind of special buffer. And now the make buffer factory is actually more of a factory because depending on your selection it now allocates one of these two different classes. I also did something very similar for the buffer reader class. There's only one important function in this interface. So I kept the buffer reader class as is and made the items available function virtual. I then created the buffer reader SM for single mapped class that overrides the items available function. And again, the buffer reader, sorry, buffer add reader function is now more of a factory and that it chooses which of these classes to instantiate. Another important detail for this project to keep in mind is that accelerated blocks may need to allocate their upstream buffers in addition to their downstream buffers. As you noticed earlier, the device needed an input DMA buffer and an output DMA buffer. Also in the GNU radio world that would be an upstream buffer and a downstream buffer, which is to say a buffer that's being read from and a buffer that's being written to. So I also added an interface to allow this to happen. Basically only when it needs to happen, which is to say only when an accelerated block is present, there's now a replace buffer function that will allow that accelerated block to replace its upstream buffer with a custom buffer that it can use. Now I'm going to talk a little more in a little more detail about the single mapped buffer. So one of the key differences between a single mapped buffer and a double mapped buffer is that the single mapped buffer has to manage wrapping around the end of the buffer explicitly. As I mentioned with the double mapped buffer, it's very elegant implementation means that you don't have to explicitly worry about the wrapping case, but single mapped buffers of course are not worked differently and you do have to worry about that case. Another thing that comes into play here is that the size alignment between the producer and the consumer, which is to say the block that's writing and reading becomes important. For many simple cases, the writer is writing in increments of one and the reader is reading in increments of one and everything works out very well. In more complicated scenarios, those read and write granularities may not be one and that leads to different problems, especially with regard to wrapping on a single mapped buffer. So for the first draft of this project that I was working on, I attempted to determine the read and write granularities and size the buffer such that they would never get stuck so that wrapping could be handled automatically without any blocking. However, that proved to be very difficult and in fact, eventually I determined it was impossible with the current interfaces. 
So I took a new approach and added some functions to explicitly handle the cases where the input or the output might get blocked. So this is in the next draft of these changes. So first off, I make a reasonable attempt at size alignment, although it may not be perfect. And then like I said, I added explicit callbacks, output blocked callback to the buffer class and an input blocked callback to the buffer reader class to handle those two cases. So the output blocked case would look something like this. So consider a case where your write granularity is three and your read granularity is one. And in this case, you can see the write pointers at the end of the single map buffer and it only has space to write two items, but it needs to write three. So it's blocked. So the idea here is that you would find the lowest read pointer that may still have data to read and copy that data back to the beginning of the buffer and then reset the read and write pointers. And as you can see, the read pointer gets reset to zero in this case. And then the write pointer is now repositioned so that it can continue writing. Now is unblocked. Similarly the input blocked case is slightly more complicated, but ultimately very similar. So if you consider this case where the write granularity is one and the read granularity is three, here the read pointer is at the end of the buffer and it has two items until it hits the end of the buffer. Although if you look back at the beginning of the buffer, there's now plenty more items. So it could actually continue reading if it weren't for the fact that it's blocked at the end of the buffer. So the idea here is that the pending write or read data at the beginning of the buffer gets moved down and then the read data at the end of the buffer gets moved back to the beginning of the buffer. And again, the pointers get adjusted. So the read pointer ends up back at the beginning of the buffer and the write pointer gets moved down to the next available position. And now the input becomes unblocked. So the current status of these changes is that testing is ongoing. What I've done is temporarily modify the GNU radio runtime so that all buffers are allocated using the single map buffer implementation. And then I've been running the existing GNU radio tests. And those present a lot of interesting cases that I've been working through. I've got most of them worked out, but there's still a few that are giving me trouble that still need to be resolved. So that's where things are at right now. And like I said, once those issues are resolved, then the next draft will be available. In addition to the single mapped buffer implementation, there's also another interface that allows blocks to actually allocate custom buffers. I haven't put a lot of detail into this presentation because the interface that I created for the first draft is going to be revised for the second draft. So I will mention what I did, but I'm not going to go into too much detail because it's going to change. So what I did is I created three virtual functions and added them to the block class. And by virtue of the fact that they're virtual functions, that means that that didn't break anything. And the three functions do three very simple things. So the first one returns the type of buffer, which is used by the setup functions to determine if a buffer needs to be allocated. The next one is the function that actually allocates the buffer, and the third is the function that frees the buffer. 
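The output-blocked case just described can be modeled at the index level like this. It is only an illustration of the copy-back idea (find the lowest read pointer, slide the unread data back to the start, reset the pointers), not GNU Radio's real implementation.

```python
# Toy model of the "output blocked" callback: not enough contiguous space at
# the end of a single-mapped buffer, so copy unread data back to the start.
def output_blocked_callback(data, write_idx, read_idxs):
    """data: bytearray; write_idx: writer position; read_idxs: one index per reader."""
    lo = min(read_idxs)                      # lowest read pointer: oldest unread data
    pending = write_idx - lo                 # items written but not yet fully consumed
    data[0:pending] = data[lo:write_idx]     # slide the unread region to the start
    new_read_idxs = [r - lo for r in read_idxs]
    new_write_idx = pending
    return new_write_idx, new_read_idxs

# Example: 16-item buffer, writer at index 14 needs to write 3 items (write
# granularity 3) but only 2 remain before the end, so it is blocked. One
# reader has consumed everything up to index 12.
buf = bytearray(b"ABCDEFGHIJKLMNOP")
w, rs = output_blocked_callback(buf, write_idx=14, read_idxs=[12])
print(w, rs, bytes(buf[:w]))   # 2 [0] b'MN' -> the writer can now write 3 items at index 2
```

Separately from this wrap handling, the block-level allocation interface mentioned above boils down to those three virtual functions on the block: report the buffer type, allocate the buffer, and free it.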
So this is a very simple interface, but I knew upfront that it lacked flexibility, and that was kind of the design tradeoff there. So I did receive some very good feedback on this first draft, and basically that feedback acknowledged that it was a simple interface, but did ask for more flexibility, which I agree with. So I'm going to be working on the next draft to hopefully balance these two ideas and keep a simple interface while adding additional flexibility. So I will mention, for Milestone 2, this is sort of the next evolution of the same concept, which is to support back-to-back accelerated blocks and allow data to be transferred between them without moving that data back to the host. So as you can see here, and again I'm just going to say as a standard disclaimer, this assumes that both of these blocks are on the same device, that's important of course, but if you were to have two blocks back-to-back like this, in theory they could exchange their data without actually moving the data back to the host, and that's really the whole purpose of Milestone 2: to add support for device-to-device transfers. So I will say with the changes in Milestone 1, this type of flow graph will be supported, but going between the two accelerated blocks, the data will go back to the host and then back out to the device. And again, that's just a limitation of Milestone 1, and eventually the changes in Milestone 2 will eliminate that. And again, this is a very desirable feature that will have many benefits, one of which is that it will allow people to create accelerated blocks that are more modular. Many of the accelerated blocks right now, because of this exact problem, are very, very complicated things rather than being much simpler modular things that could be pieced together. So Milestone 2 is coming in the future. So that about takes up all my time. Thank you for listening. My name is David Sorber, and the changes that I mentioned are in the GitHub repository at the URL on your screen. Just as a brief recap, the work on Milestone 1 is ongoing, and work on Milestone 2 will be beginning later this year. Thank you for your time.
Accelerator devices such as GPUs, FPGAs, or DSPs can be very useful for offloading computationally intensive digital signal processing tasks. Unfortunately, the GNU Radio SDR framework does not directly support such devices. Many workarounds have been developed to allow accelerator devices to be used within GNU Radio, but each comes with performance and/or flexibility tradeoffs. To solve these problems work is currently underway to develop generic support for accelerator devices within GNU Radio itself. The focus of this work is to modify GNU Radio to allow support for custom buffers. Custom buffer support will allow GNU Radio to directly utilize device specific buffers (e.g. DMA buffers) and therefore eliminate the need to double copy in order to move data into and out of an accelerator device. Furthermore, the custom buffer concept can be extended to allow “zero copy” data access between two kernels on the same accelerator device. This presentation will cover the design and current status of custom buffer support for GNU Radio.
10.5446/53317 (DOI)
Hello everyone, this is Xianjun from the OpenWiFi project. OpenWiFi is an open source Wi-Fi chip design that was announced at FOSDEM 2020. This year we are here again. Nice to see you all. Although I cannot see you, actually. Anyway, let's get started. We are a small software-defined radio group in the wireless group of IDLab, imec, and Ghent University, Belgium. This is the agenda of my presentation. First, the OpenWiFi project progress in the first year after going online will be introduced, with some further explanations and highlights. Next, I will talk about our community growth situation. Then comes the idea of low-cost hardware, which I believe will boost our community a lot. The last one is the roadmap for 2021. Come on, some of you might think this presentation is not as interesting as last year, right? No worry, let me show you something cool. I remind you, this is not a joke. I am talkin s on. Okay, that's all for today. Thank you. Any questions? I'm joking. Let's get started with the boring part. And you can prepare questions meanwhile. The first thing is about how we decided the release names: Ghent, Taiyuan and Leuven. Actually, before the project went online at the end of 2019, we three developers agreed that for each release, a developer will pick the name of a place on the earth to assign as the code name, wait, on the earth or in the universe. I can't remember clearly, maybe it should be in the universe. After all, Elon Musk is going to live on Mars. But the first three releases will be named by me, Wei and Michael, which is the order of joining the project. I created the project first, then Wei helped me a lot on OpenOFDM project porting and testing and many other things. Michael wrote the OFDM transmitter and also did lots of work to enable the 11n feature. So the first release name, Ghent, is chosen by me. Ghent is the beautiful city in Belgium where we work. OpenWiFi was born in Technology Park, Ghent, where our office is located. The second release name, Taiyuan, is chosen by Wei. Taiyuan is her hometown and also the capital of Shanxi province of China. The third release name, Leuven, is chosen by Michael because he lives in Leuven, which is another beautiful city in Belgium, close to Brussels. The future releases will be named by the developer who contributes the most to the release. This shows respect to the people who actually did the job. Talk is cheap, show me the code. Next highlight: CSI. CSI means channel state information. For a physical layer researcher, it's also called the frequency response or channel transfer function. Anyhow, the CSI, I believe, is available in all Wi-Fi chips, but few of them expose the CSI to you. As OpenWiFi is designed by us, we can definitely expose CSI to the external environment. For instance, now you can access CSI very easily on OpenWiFi via our side channel. Actually, we can expose more than CSI. So here, I can call CSI chip state information. A typical usage of the CSI is to detect objects in the environment. For instance, human movement, human gestures, people falling down. For instance, in this picture, and the link is here, this is a paper. You can find lots of papers about CSI applications. They collect the CSI from the transmitter to the receiver. The transmitter could be the AP in your home, right? And they collect CSI at the receiver and run machine learning or classification algorithms to estimate or detect what is in between the transmitter and the receiver. Here, the human gesture is an example.
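For readers less familiar with the term, here is what CSI looks like at the signal level: a per-subcarrier estimate of the channel frequency response obtained from a known training symbol. This is a simulated NumPy sketch only; it is not openwifi's side-channel format or API.

```python
# CSI at the signal level: H[k] = Y[k] / X[k] for a known training symbol.
import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                                                          # OFDM subcarriers
x = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, n_sc) + 1))    # known QPSK training symbol
h = np.array([1.0, 0.5 + 0.3j, 0.0, 0.2j])                         # toy multipath impulse response
H = np.fft.fft(h, n_sc)                                            # true frequency response

noise = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) * 0.02
y = H * x + noise                                                  # received symbol (freq domain)

csi = y / x                                                        # least-squares CSI estimate
print(np.abs(csi[:4]).round(2), np.abs(H[:4]).round(2))            # |CSI| tracks |H| per subcarrier
```

Sensing applications then feed features like these per-subcarrier magnitudes and phases, collected over time, into classifiers.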
I think this CSI-based sensing is a very interesting application and lots of companies are working on this, to count people, detect people. But apparently, there will be some privacy concerns. As you can imagine, if I am a bad guy, I can receive your AP signal from outside your house and try to detect something inside, right? So CSI is just a tool. Using it in a good way or a bad way depends on people. So how can we protect people from bad usage of the CSI? Here, you can find a work from one of our experimenters in the ORCA project. It's called CSI Murder. Murder the CSI. Here is the link. You can check all the details online. The basic principle is also not difficult. Since the openwifi chip is designed by us, we can add some random or fake CSI before the signal leaves the transmitter. Then in this way, when you try to collect CSI via the receiver, you actually collect the combined CSI of the actual CSI and the fake CSI. So with this fake, unknown CSI, your machine learning or classification won't work anymore. So if the Wi-Fi AP or some transmitter could give people this option, then people will have the right to let the CSI work or not. I think that would be good progress for the CSI technology. Actually, following this direction, you can imagine lots of interesting papers or research, right? Because it's attack, defense, attack, defense. There will be endless work, I think. The next highlight is about the IQ sample. I guess all the people in this room know what an IQ sample is, because most SDR devices offer you IQ sample streaming from the tuner to your computer or from your computer to the antenna. With the openwifi design, definitely we can also do that. But here we present you IQ samples in a different, or in a clever, way, because our IQ sample capture function is under a trigger condition. We won't do streaming to the computer. Our IQ sample capture is triggered by signals inside the FPGA, by events. For instance, the event of decoding success, the event of preamble detection, the event of the channel going from idle to busy, the event of the channel going from busy to idle. We have defined 32 events for you to select when you capture the IQ samples. So why did we develop this feature? The original purpose was for ourselves to debug our design, because at the beginning phase, we realized our low-level MAC was not ideal. We generated a lot of collisions in the air. So the question is, how can we capture the collision moment and then debug our FPGA design further? Then we found this idea, IQ capture. For instance, how do we use the IQ capture to capture the collision? We do it like this. We set up normal communication between openwifi and the peer Wi-Fi station via the main antenna. But on our platform, there is a secondary antenna. We call it a monitoring antenna. We can put this antenna, via cable, close to the peer Wi-Fi station. Then we set the IQ capture event to: we are transmitting while the RSSI detected from the monitoring antenna is above a threshold. Actually, this means that when we are transmitting, the other side is also trying to transmit. That means a collision in the air, because the packets will be overlapping in the air. So in this way, we capture the collision. We analyze what happens in our FPGA, analyze the time before and the time after. You can define the capture position, how many pre-trigger IQ samples should be stored. It's a powerful tool (a small host-side sketch of the trigger idea follows below). The secondary usage could be to help you debug your receiver.
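Here is that host-side sketch of the trigger idea: keep a pre-trigger ring buffer, then save samples around the event. In openwifi the real capture runs inside the FPGA with its own trigger and register interface; this Python model only illustrates the concept, and the trigger and sample counts are arbitrary.

```python
# Event-triggered IQ capture with pre-trigger history (conceptual model only).
import numpy as np
from collections import deque

def capture_on_trigger(samples, trigger, n_pre=64, n_post=192):
    """samples: complex IQ stream; trigger(i, s) -> True when the event occurs."""
    history = deque(maxlen=n_pre)                 # pre-trigger ring buffer
    for i, s in enumerate(samples):
        if trigger(i, s):
            post = samples[i:i + n_post]          # post-trigger samples
            return np.concatenate([np.array(list(history)), post])
        history.append(s)
    return None

rng = np.random.default_rng(0)
iq = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) * 0.01
iq[1000:1100] += 1.0                              # a burst standing in for "RSSI above threshold"
snap = capture_on_trigger(iq, lambda i, s: abs(s) > 0.5)
print(len(snap), int(np.argmax(np.abs(snap) > 0.5)))   # 256 samples total, burst starts at index 64
```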
Another example: if you set the trigger condition to CRC fail, then you can capture all the bad packets that are not decoded successfully by our FPGA. Then we do offline analysis on those IQ samples in MATLAB or Python, to optimize or improve our receiver algorithm. The third usage: apparently there are two antennas, so you can already capture two antennas' IQ samples to develop or verify your MIMO algorithm in the early phase, to prepare for the MIMO development. The next highlight is application notes. Actually, we learned this from those IC companies and from srsLTE. We noticed that srsLTE also prepared lots of application note documents online. We have set up an application notes directory on GitHub. Currently, there are nine application notes, covering from basic usage in different modes to the Wi-Fi 4 introduction and packet injection. It's very useful for tests, and there is also the CSI and IQ capture stuff. So today, if you want to explore those potential areas, potential usages of our platform, do check out the application notes. These highlights cover lots of things, essential updates, as you can see in the video. I cannot go through them all in this meeting. What I want to say is that they are really, really important. For instance, at the beginning, users kept telling us that sometimes they encountered a kernel crash, kernel panic, or driver crash. We spent lots of effort on this. Finally, it pushed us to do a better design for the driver-FPGA interaction mechanism. Because for a processor system, you can never assume the processor will handle the interrupt in time. The interrupt sometimes is handled in time by the processor, but sometimes several interrupts are queued and delayed, and they arrive at the CPU or wake up the CPU suddenly in a very short time. So lots of state information has to be stored somewhere. You cannot drop it, because you don't know when the processor will process it. Anyhow, there are lots of details between the Wi-Fi driver and the FPGA hardware. You can feel the big improvement if you run our current design versus the initial design. Okay, the next topic is about community engagement, or the community growth situation. I would say we did a good job regarding this. If you check the openwifi GitHub over the one-year period, we got 1.6K stars. That's already a lot, I would say, in this very narrow professional domain. 200 forks, around 100 watchers, and several issues still open. Three internal developers you have known before, and also three external contributors, surprisingly. Although their code hasn't been merged yet, I believe some of it will be merged this year. But if you compare our project to another very hot FPGA project in the processor domain, the Rocket Chip, the leading chip design project of the RISC-V domain on GitHub, they have been online for two years and some company, I think, has already taped out the Rocket Chip. They have 1.7K stars, more or less the same, but they have many more forks, watchers, issues, and contributors. Apparently, their community is much bigger and more active than ours. I think the reason could be related to the question I raised with you last year: why is the computer science domain so open and the connectivity domain so closed? I don't know why, but there could also be some other reasons. For instance, maybe people are so satisfied with the commercial Wi-Fi chips to play with, no matter whether for hacking, for security research, or for IoT applications. They are quite happy, so why would they use openwifi?
And maybe because we are lacking the killer application or features that can only be done by openwifi. Actually, we are offering some of them, like the CSI, CSI murder, and IQ sample capture. What else could come? For this part, we are also eager to hear from you. The third reason could be that the Linux driver stuff, as I mentioned before, the processor-FPGA interaction and FPGA development, are too difficult for many developers, because I think maybe a user space program is a bit easier for most developers: your software runs in a pure CPU environment, with no external interaction. And the last reason could be that the hardware is too expensive, although we have tried hard to push our design to a cheaper platform. For instance, at the beginning, our design ran on the ZC706, which costs you around $4,000. Now we can run on the Z4 plus RF, costing only $900. It's already a big push, but a Wi-Fi chip only costs you $0.5, even less. There's still a huge difference. I think this huge difference sometimes makes it difficult for people to make decisions. So if you have any idea or theory about the reason, do come and tell us. We really want to grow the community. Okay, regarding the low-cost hardware, I have been looking at the possibilities for a long time. The final conclusion is like this, well, not a final conclusion, an initial conclusion. I believe the price of a cheap openwifi platform could be as low as $200. Some of you already saw a long discussion, a several-day discussion, on Twitter about the cheap hardware price at the end of 2020. I guess during that time lots of people had holidays, so we had time for discussion. Anyway, I also discussed in person with many hardware makers in the West and in China. I feel that in the future, there will be a cheaper openwifi platform. The price would be around $200, maybe even cheaper. Let's see. New features planned. Well, we have spent one year to realize there isn't any feature that is easy. Even a small feature or a bug fix, nothing is easy, actually. So we will be very careful to plan new features regarding the difficulty and the resources we have. So this year, we will start to work on the Wi-Fi 6, or 11ax, standard. This year, maybe the basic physical layer, transmitter and receiver, will be developed. And also MIMO, since we have prepared some basic tools like the dual-antenna IQ capture, and we also have some master theses working on that. So MIMO will also be started, starting from the antenna-switching diversity receiver. Let's see what we can achieve this year. Maybe during summer, we could also make a proposal to Google to hire some students via the Google Summer of Code. I don't know. Let's see. Breaking news. I think this is the big news this year. Big news always comes at the end of each year, right? Last year, openwifi; this year, the bladeRF. The bladeRF team also released a Wi-Fi design for the FPGA on their SDR board. Although there are lots of differences between their design and our design, I think basically we work in the same domain: designing the Wi-Fi hardware, the chip, via the FPGA. I think this is a good sign for us as well. That means that in this community, Wi-Fi FPGA design is an actual need from the community, not only our imagination. So we will keep watching and, well, let's see. We hope this could encourage more and more people to think about playing with Wi-Fi at a very deep level, deep into the driver and FPGA, to come up with much wilder ideas. A quick recap of today's presentation.
The first one is that we have made lots of progress on different aspects in the first year online. I'm proud of the project and our developers. We will push further for the advanced features, like Wi-Fi 6 and MIMO, but we realize they cannot be done in one night; still, we will keep moving. And for the community, we will also try harder to grow it. Regarding this point, we do need your help. Your help is very, very welcome. Even if you never use our design, you can mention it, right, to some other people you think our design might help. That's also a very good help. And thank you in advance. And finally, the breaking news, another Wi-Fi FPGA design. I think that means more people will join this deep-layer, deep Wi-Fi playing domain. We will keep moving, keep watching. Sometimes competition is good for the ecosystem. All right. This time, this is the real end of my presentation. Any questions, please do communicate with us. Bye-bye. Hi. So, okay, first of all, questions from Daniel about the CSI functionality. Yeah, this is quite a good question. So, the basic concept is that we involve a fake channel response at the transmitter. So, if we consider a very simple single-antenna communication scenario, if the real channel is AWGN, right, and we add the fake frequency selective channel on the signal, then I guess the whole link will be degraded. After all, it turns from AWGN into a frequency selective channel. But the real case is like this. If you monitor CSI for a normal Wi-Fi link, there's always frequency selectivity, either caused by the RF or caused by the multipath. So, in this case, when we add some slight frequency selectivity at the transmitter, I don't think it will bring a big degradation to the whole system. So, the basic principle is that we make sure our fake CSI is added after all the processing, maybe just before the IFFT. We add the frequency selectivity on each subcarrier, or after the IFFT in the time domain as a filter. Then the receiver side doesn't need to know this. The whole bunch of receiver algorithms will treat the fake channel plus the real channel as a whole, right? So the receiver doesn't need to be aware of that; the receiver doesn't need to be modified to work with the fake CSI. So, yeah, but I believe for the MIMO case it could be more complicated. But the basic principle we should follow is that we add a fake channel after all the pre-coding, the beamforming operation, and then let the rest of the system treat the fake channel plus the real channel as a whole. If we follow this principle, the receiver doesn't need to be changed. There's one situation: if the transmitter needs the receiver to feed back the channel state information to help the transmitter come up with the optimal pre-coding. In that case, I think the transmitter needs to consider this fake channel added at the transmitter side, because the feedback from the receiver will include the fake channel response, but the signal the transmitter sends will also include that. I think lots of things could be done there in this anti-sensing, fake-channel stuff. So, a second question came up just right now from Jan. Don't devices like PlutoSDR, HackRF, Red Pitaya and others have an FPGA that could be suitable for openwifi? And what is holding you back from porting to those platforms? The basic reason is that the FPGA on those platforms is too small for our current design. Our current design is a basic single antenna, 20 MHz, 11a/g/n design. The smallest FPGA currently we are working on is the 7020.
On the Pluto it is a 7010 FPGA, and on the HackRF it is even much smaller. So, we need a bigger FPGA. That's the reason. Okay, perfect. And then probably the last question that we can ask is: what is the main motivation to work on openwifi? Like, even after the awesome work you did last year, what is the main motivation for openwifi to exist? I think the first motivation is that we realized that, before our project, when people wanted to work on the chip level of the Wi-Fi design, if they wanted to test their new ideas at the chip level, they didn't have so many free choices. They needed the National Instruments Wi-Fi Application Framework, or they needed the WARP design, the reference design from Mango Communications, with restricted license conditions and some license fee. So, we decided to come up with a free software design for this, to help people have more choice, more free choices. And in this action, we also, I think, lower the price of the whole reference design, software plus hardware, a lot. So, this could encourage more labs or universities or groups to jump into this field if they have a limited
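To connect the fake-CSI answer from the Q&A above to the signal level, here is a toy frequency-domain model in NumPy: the transmitter multiplies each subcarrier by a random fake response before the IFFT, and the receiver estimates and equalizes the combined channel as usual. It deliberately skips the IFFT, cyclic prefix and noise, and it is not the actual CSI-murder implementation.

```python
# Toy model of fake-CSI injection: the link still works, but the estimated
# CSI no longer reflects the physical channel.
import numpy as np

rng = np.random.default_rng(2)
n_sc = 64
data = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, n_sc) + 1))   # QPSK on each subcarrier

H_real = np.fft.fft([1.0, 0.4 + 0.3j, 0.2j], n_sc)                        # physical channel
H_fake = np.fft.fft(rng.normal(size=3) + 1j * rng.normal(size=3), n_sc)   # random fake response

tx = data * H_fake                   # obfuscation applied per subcarrier before the IFFT
rx = H_real * tx                     # what the receiver sees (frequency domain, no noise)

H_est = rx / data                    # channel estimate from known symbols = H_real * H_fake
eq = rx / H_est                      # normal equalization: the data is recovered exactly
print(np.allclose(eq, data))                          # True  -> the link still works
print(np.allclose(np.abs(H_est), np.abs(H_real)))     # False -> sensing sees the wrong CSI
```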
Openwifi project, the opensource WiFi chip design, was firstly announced in FOSDEM 2020. During the unusual 2020, openwifi project has made many progresses, also encountered some difficulties. In this presentation, openwifi project would share with you: - result of user/community growth - main progresses: hardware support; performance; stability; bug fixes; new features - difficulties: community participation (FPGA people << software people); too expensive hardware - idea of low cost hardware - new features planned
10.5446/53318 (DOI)
Hello everyone, welcome to this talk, optimization of SDR applications on heterogeneous systems on chip. This is a joint work from Arizona State University, the University of Wisconsin Madison, the University of Arizona, Carnegie Mellon University and the University of Texas at Austin. Recently proposed domain-specific systems on chip, DSSOCs, combining general purpose, special purpose, and hardware accelerator cores, optimized the architecture, computing resources, and runtime management by exploiting the application characteristics for a given domain. As such, DSSOCs can boost the performance and energy efficiency of software-defined radio, SDR applications without degrading their flexibility. Harvesting the full potential of DSSOCs depends critically on integrating an optimal combination of computing resources and their effective runtime utilization. For this reason, the design space exploration process requires evaluation frameworks to guide the design process. However, given the design complexity, the evaluation framework should enable rapid, high-level, simultaneous exploration of scheduling algorithms and power thermal management techniques. Driven by this need, in this study, we developed a system-level simulation framework, DS3, that can perform design space exploration, evaluate scheduling and resource management algorithms. Our simulation framework differs from full system simulation and hardware emulation and provides the following advantages. First, DS3 is very fast compared to full system simulators, like Jam 5, where simulation of even few milliseconds of workloads takes hours. Second, the controlled level of abstraction for both applications and target platforms enables rapid design space exploration. Furthermore, DS3 supports modifying existing scheduling and dynamic thermal and power management, DTPM, algorithms, as well as implementing new algorithms, and integrate them with the framework easily. Finally, DS3 provides a plug-and-play interface to choose between different scheduling and DTPM algorithms. The organization of the DS3 framework is depicted in this slide. The resource database contains the list of processing elements, PEs, including the type of HPE, capacity, operating performance points, among other configurations. By exploiting the deterministic nature of domain application, the profiled latencies of the tasks are also included in the resource database. The simulation is initiated by the job generator, which generates application representative task graphs. The injection of applications in the framework is controlled by user-defined distribution, for example, and exponential distribution. The DS3 framework invokes the scheduler at every scheduling decision epoch with a list of tasks ready for execution. Then, the simulation kernel simulates task execution on the corresponding P using execution time profiles based on reference hardware implementations. Similarly, DS3 employs analytical latency models to estimate interconnect delays on the SOC. After each scheduling decision, the simulation kernel updates the state of the simulation, which is used in subsequent decision epochs. The framework aids the design space exploration of DTPM techniques by utilizing these power models and commercially used dynamic voltage and frequency scaling, DVFS, policies. DS3 also provides plots and reports of schedule, performance, throughput and energy consumption to help analyze the performance of various algorithms. 
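As a rough illustration of the simulation flow described above (exponential job injection, a pluggable scheduler invoked at decision epochs, profiled per-PE execution times), here is a minimal Python sketch. It is not DS3's actual code or API, and the PE names and latencies are made up.

```python
# Minimal sketch of a DS3-style simulation loop with a pluggable scheduler.
import numpy as np

rng = np.random.default_rng(0)
pe_names = ["big_core", "little_core", "fft_accel"]
exec_time_us = {"big_core": 40.0, "little_core": 90.0, "fft_accel": 8.0}  # profiled task latency

def met_scheduler(ready_task, pe_free_at, now):
    """Minimum-execution-time policy: ignore availability, pick the fastest PE."""
    return min(pe_names, key=lambda p: exec_time_us[p])

def simulate(scheduler, n_jobs=1000, inj_rate_jobs_per_ms=5.0):
    arrivals = np.cumsum(rng.exponential(1000.0 / inj_rate_jobs_per_ms, n_jobs))  # microseconds
    pe_free_at = {p: 0.0 for p in pe_names}
    latencies = []
    for t in arrivals:                      # one single-task job per arrival, for simplicity
        pe = scheduler("task", pe_free_at, t)
        start = max(t, pe_free_at[pe])
        finish = start + exec_time_us[pe]
        pe_free_at[pe] = finish
        latencies.append(finish - t)
    return np.mean(latencies)

print("avg job latency (us):", round(simulate(met_scheduler), 1))
```

Sweeping the injection rate and swapping in different scheduler functions is essentially what the case studies later in the talk do at a much larger scale.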
DS3 comes with two built-in heuristic scheduling algorithms, minimum execution time, MET, and earliest task first, ETF, schedulers. To schedule the tasks in an application graph, represented as a directed acyclic graph, DAG, to a set of P's in the system, MET scheduler only utilizes estimated execution times of the tasks. ETF scheduler, however, makes use of the information about the communication costs between tasks and the current status of all P's in addition to estimated execution times to make a scheduling decision. The framework also includes a constrained programming, CP scheduler, developed based on the concepts of IBM C Plex tool. Tasks in an application graph are represented by interval or decision variables in CP scheduler. Then, a set of constraints are applied to these interval variables to formulate the scheduling problem. This slide explains a couple of the constraints used in the formulation namely, span, alternative, no overlap, and end, before start constraints. For example, span constraint ensures that a combination of interval variables or tasks represent an application graph, whereas end, before start constraint, accounts for the communication cost between two tasks. If we define T as the tasks in an application DAG, D as the DAGs under consideration for scheduling and let P to represent the number of P in a system, then the formulation of CP scheduler as a constrained programming model is given on the right. The objective function of the model is to minimize the summation of latencies of application DAGs, and hence, CP scheduler provides an optimal schedule in terms of application execution time and serves as an upper bound for comparison. We describe the built-in scheduling algorithms in DS3 using one of the most used canonical task graphs shown on the left before analyzing the results from real-world applications. In this task graph, each node represents a task and each edge represents average communication cost across the available pool of PEs for the pair of nodes sharing that edge. The computation cost table in the middle indicates the execution time of the nodes on each P. Then DS3 framework generates Gantt charts to visualize the generated schedules. This allows the end user to understand the dynamics of the scheduler under evaluation. DS3 comes with six reference applications from wireless communications and radar processing domain. The Wi-Fi protocol consists of transmitter and receiver flows as shown on the left. It has compute-intensive blocks, such as FFT, modulation, demodulation, and Viterbi decoder, which require a significant amount of system resources. DS3 also includes a simpler single-carrier protocol with a lower bandwidth than latency requirements. Finally, the framework has two applications from the radar domain as part of the benchmark application suite, one, range detection, and, two, pulse Doppler which are shown with the block diagrams on the right. As explained earlier, in DS3, applications are represented as directed acyclic graphs. This slide shows DAG's representations for Wi-Fi transmitter and receiver, pulse Doppler, and radar correlator. While radar correlator is only consisted of seven nodes, pulse Doppler DAG has 451 nodes. This abstract modeling provides an easy way to implement and simulate the target applications in DS3. DS3 enables comparing performances of schedulers for streaming SDR applications under a target SOC. 
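The difference between the two built-in heuristics described at the start of this section can be shown in a few lines: MET looks only at execution times, while an ETF-style choice also accounts for PE availability and the cost of moving the task's input data. The numbers below are invented for illustration and are not taken from DS3.

```python
# MET versus an ETF-style decision for a single ready task (illustrative numbers).
exec_time = {"big_core": 14.0, "little_core": 30.0, "fft_accel": 5.0}   # task runtime per PE
comm_cost = {"big_core": 0.0, "little_core": 0.0, "fft_accel": 12.0}    # cost to move inputs
pe_free_at = {"big_core": 2.0, "little_core": 0.0, "fft_accel": 9.0}    # current PE status
now = 0.0

met_choice = min(exec_time, key=exec_time.get)
etf_choice = min(exec_time,
                 key=lambda p: max(now, pe_free_at[p]) + comm_cost[p] + exec_time[p])

print("MET picks:", met_choice)   # fft_accel (fastest in isolation)
print("ETF picks:", etf_choice)   # big_core  (earliest actual finish: 2 + 0 + 14 = 16)
```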
In the first case study, the heterogeneous SoC under consideration consists of 16 general purpose cores and hardware accelerators, as shown in the table on the left. In addition, a wide range of workload scenarios with SDR applications is considered by sweeping the application injection rate. As seen in the figure on the right, the scheduling algorithms exhibit different trends of average execution time. Thus, DS3 helps the user determine the scheduling algorithm that is most suitable for a given SoC architecture and set of workload scenarios. For the DTPM case study, the same SoC configuration from the previous case study is utilized. The study explores eight frequency points for the big cluster and five for the little cluster using a 200 MHz step. All possible DVFS modes were evaluated, that is, all possible combinations of power states for each PE, in addition to on-demand, power save, and performance modes. The figure on the right shows the Pareto frontier for all configurations. The on-demand and performance policies provide low latency with high energy consumption, while power save minimizes the power at the cost of high latency, which results in sub-optimal energy consumption due to an increase in the execution time. The best configuration in terms of energy-delay product, shown as the green star, uses 1.6 GHz and four active cores for the big cluster and 600 MHz and three active cores for the little cluster. The final case study illustrates how DS3 can be utilized to identify the number and types of PEs during early design space exploration. The study employs the benchmark applications to explore different SoC architectures. All configurations in this study have four big Arm A15 and four little Arm A7 cores to start with, and DS3 guides the user to determine the number of configurable hardware accelerators in the architecture. In a grid search, the study varies the number of instances of FFT and Viterbi decoder accelerators. The table on the left lists the representative configurations out of 20 configurations. Each row in the table represents the configuration under investigation, with an estimated SoC area, average execution time, and average energy consumption per job. The figure on the right plots the energy consumption per job as a function of the SoC area. As the accelerator count increases in the system, the energy consumption per operation decreases. This comes at the cost of larger SoC area. For this specific workload, configuration 3, that is, an SoC configuration with two FFT and one Viterbi decoder accelerators, represents the best trade-off. As demonstrated in this study, the DS3 framework provides metrics that aid the user in choosing a configuration that best suits power, performance, area, and energy targets. In this slide, a short demonstration of a scheduling case study is provided where different workloads with varying injection rates for an SoC configuration are executed with the built-in schedulers. The runtimes for the heuristic schedulers are much lower compared to the runtime of the CP scheduler, since the CP scheduler dynamically calls the constraint programming model to solve for a schedule based on the application arrival rate. The resulting schedule is then stored in a lookup table. The demo shows that the DS3 framework executes these workloads and plots a figure at the end to compare the schedulers in terms of average execution time. We see that, for this demo, while the heuristics perform similarly to each other, the CP scheduler provides the best result.
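The DVFS and design-space results above are essentially post-processed the same way: keep the Pareto-optimal (latency, energy) configurations and pick the one with the lowest energy-delay product. A small sketch with made-up numbers:

```python
# Pareto filtering and energy-delay-product selection over (latency, energy) points.
configs = {                        # name: (avg latency in ms, energy per job in mJ), invented values
    "performance":      (1.0, 9.0),
    "ondemand":         (1.1, 8.5),
    "powersave":        (4.0, 6.0),
    "1.6GHz/4big+3lit": (1.4, 4.2),
}

def pareto(points):
    """Keep configurations not dominated in both latency and energy."""
    return {n: (l, e) for n, (l, e) in points.items()
            if not any(l2 <= l and e2 <= e and (l2, e2) != (l, e)
                       for l2, e2 in points.values())}

front = pareto(configs)
best_edp = min(front, key=lambda n: front[n][0] * front[n][1])
print(sorted(front), "-> best EDP:", best_edp)   # the DVFS-tuned configuration wins on EDP
```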
As a summary, we developed a Python-based, system-level simulation framework, DS3, for domain-specific systems on chips that enables the comparison of different scheduling algorithms, evaluation of dynamic thermal and power management techniques, and extensive early design space exploration. The framework is calibrated against two commercially available platforms, which are the Odroid XU3 and the Zynq ZCU102, with an average error of 5% for all cases. We envision that DS3 will pave the way for easy and rapid exploration of domain-specific systems on chip, and to this end, DS3 is an open-source tool and can be accessed via the link below. Finally, this work is sponsored by the Air Force Research Laboratory, AFRL, and the Defense Advanced Research Projects Agency, DARPA. For more details, please take a look at our article and download the code to conduct similar studies. Thanks for listening. Welcome, Steven. Oh, okay. We are into Q&A. Sorry, I wasn't in the room slightly earlier. So, thanks for your presentation. One of the things is that we haven't gotten any questions from the group, although I'm seeing one that wasn't tagged, asking: according to the simulation, what's the latency between the last received Wi-Fi signal sample and the CRC being sent out? Is that something that you can address? So, okay. The question is asking, okay, what is basically the latency for the received packets? Yes. So, I mean, for this profiling that we need, we used, basically, as you see, those platforms. And since we wanted these simulations not to diverge from reality, we wanted to have this profiling, and the other thing is that we don't have the actual chip. So, the performance is a little bit, you know, kind of low, meaning that for one received packet we are, I think, executing around 250 microseconds, whereas for the transmit, it's about 69 microseconds. So, this is again the end-to-end latency for one packet. But again, the reason that we wanted to use those systems, I think there was another question saying, you know, is the DSSoC using a Zynq FPGA. That's not really what we are doing with this high-level simulation. We are using the profiling results from those platforms for the Arm cores. And then we are trying to integrate, you know, the accelerators that are currently being built, and their latency and power numbers. And that's why we are trying to come up with this DSSoC implementation. And hopefully, once we have the prototype of the chip, we will update these numbers and we will try to calibrate our simulation framework accordingly. Okay. I was looking at the GitHub repository myself and I saw that it had received a couple of updates over time. Is there an ongoing plan to continue contributing code into that? So, the thing is, right now, since our project is, you know, really continuous, it's a four-year project and right now we're in the third year. And as you see from the GitHub, there's not much going on. But internally, we have updates over time. And based on that, we have a couple of papers already published. Maybe I can try to show you the links, like how we do imitation learning for both scheduling and also for the DTPM analysis. And that's why we are kind of limiting ourselves on the GitHub, the open version, so that, you know, we don't really reveal much to our competitors.
But hopefully, the plan is that once we, I think, at the end of this year, we'll be more public, meaning that we will try to share all our code related to all the papers that we have. So hopefully, then it will be more open to the public and people will be able to play with all the things that we did. And if there are some questions, or maybe people want to contribute, hopefully we will be able to engage more. Okay. Checking for other questions here before I start asking my own. So I saw you were using an Odroid XU3 and a Zynq board as well for some of the scheduling. What sorts of other devices were looked at, or is there a reason why you went with those as platforms? So the quick answer is that at the time we had those platforms in hand, they were readily available, and we had some experience with them. And as the project continues, we now, for example, have some ideas about what to put in as Arm cores and what type of accelerators will be in the design. For example, now we are looking at, I think, the Raspberry Pi 4, which has, I believe, an ARM Cortex-A72. And we are all, right now, currently doing some analysis on that to see, you know, what the numbers will be for this profiling and also the power. And then we are also trying to use some kind of compiler optimization to see what kind of performance we are getting. So again, those were only a reference point for us. Again, simulations can easily diverge from reality and we wanted to keep ourselves, you know, at some kind of solid point. And hopefully when we have the first prototype, and as the project continues and we have more information about our accelerators and also the medium that we are using for the data transfer, we will keep updating the simulator, and hopefully we will have, you know, more up-to-date or more reasonable results which match the current, maybe, procedures or, you know, protocols. Okay. We did get another question in, asking about the decision to model the communication costs in the way that you did, compared to adding a node with a fixed cost, I guess, or a cost model, and kind of how that would affect the simplicity of processing the model. So, yeah, that's always a big question. Especially if you consider, you know, many processing elements, at some point it matters what the real data traffic will be and how it will affect your scheduling. So currently, what we are using, we are getting this data from analytical models, which is to say they are basically kind of constant. But in the future, we are also trying to implement our own model inside, so that you will have the real traffic, and we will try to accommodate those dynamically changing data movement costs before it's scheduled. And that's also on our radar. But again, for the time being, we just wanted to go with some analytical numbers, and from our models we see that those do not change much. Again, it's based on the workload you are working on and also some other criteria. But again, that's also under consideration for the future. Okay. Have you looked at RISC-V as a possible target? There are more and more hard processors coming out as well as a lot of FPGA models. No, we didn't look at that right now. But it might be possible in the future. Hi.
As this is an ongoing project with some competitive elements to it, it's always a little bit hard to ask future plans and specific areas that you're interested in looking at. But is there anything that you can share there? For example, again, this project is kind of a big project that we are dealing with a big team really. So this is basically this framework at the higher level as you see from the slide tag and also I believe I shared the paper. So this is basically helping us to design time. But really, we really need to implement this at the lower level and really in the real systems to run time and how it looks. Because again, really, what you get from simulation is some it might be garbage in garbage. But especially it's like one that talk that will happen in one hour. So Joshua Mac will present a runtime environment which has a couple of the ideas that is gained from this high level simulations. So in that, you will be able to see really how we are doing this scheduling in the real systems at the runtime by the experience from or the ideas coming from this high level simulation. So in terms of future work, again, always we were thinking that at the high level, we will always deal with this algorithmic challenges instead of really going into the platforms and how we can implement this and how we can really get this result out from the real system. And by having so we just basically separated concerns and we just only focus on this algorithms and the concepts or theories. Then we have these results that you really wanted to go on and implement in the real systems. And again, I will try to share on the chat one of the again, the papers that we came out about some kind of machine learning, they look basically in this high level framework and how we try to implement that into the real system. So basically as a future work, we want to really move these ideas into the real platforms. And when we have this hopefully the prototype or more closer design that we have, if we can find something, we will try to implement and we will try to collaborate our simulation framework accordingly. So we want to go basically again to the long story short, have these scheduling ideas in an intelligent way and can we use some machine learning tools or algorithms to do the job? Okay. We've probably exhausted the chat room, I think. And most I just want to say thank you very much for coming and giving the talk. And being here to do the Q&A. Is this your first time attending FOSDM as much as this? Yes. That's the first time, yes. Well, I can heartily recommend coming back next year when hopefully we'll have more of the authentic original experience. Yes. Having a room full of 100 people is definitely. Yeah. Actually last year our team also some part of our team also joined, but I wasn't able to ammog the lucky ones that went to the Belgium. But this year we are attending with this virtual platform. Hopefully next year it will be in person and hopefully it will be there again. Great. Okay. We have two minutes left before this Q&A will wrap itself up and move on to the next talk. Is there anything else? Have you used Gini Radio or other open source frameworks for any total applications? No, but currently we are having discussions with the Gini Radio team since we are doing this somehow interested. So, interesting scheduling at the high level and then trying to implement those in the low level in the real platforms. 
And we are having talks with the GNU Radio team, and hopefully we will have something going on and our relationship will move forward. But I haven't used GNU Radio myself. We're getting lots of thank yous from chat. So, I'll pass that on as well myself. And yeah, I think we can close this as much as that's possible. Yeah, thanks, and I will shortly share the links for the other two papers coming out of this framework for the interested audience. Thanks. For folks who would like to get into a much deeper conversation than me as your talking head, feel free to join this chat room as soon as the bot allows; doing Q&A in the chat is what we're finding works best. You are able to join this video conference once it goes off the air, but that hasn't seemed to work so well in the past few dev rooms.
Recently proposed domain-specific systems-on-chip (DSSoCs) optimize the architecture, computing resources, and run-time management by exploiting the application characteristics for a given domain. As such, DSSoCs can boost the performance and energy-efficiency of software-defined radio (SDR) applications without degrading their flexibility. Harvesting the full potential of DSSoCs depends critically on integrating an optimal combination of computing resources and their effective runtime utilization. For this reason, the design space exploration process requires evaluation frameworks to guide the design process. Full-system simulators, such as gem5, can perform instruction-level cycle-accurate simulation. However, this level of detail leads to long execution times and is beyond high-level design space exploration requirements. In contrast, hardware emulation using Field-Programmable Gate Array (FPGA) prototypes are substantially faster. However, they involve significantly higher development effort to implement the target SoC and applications. Given the design complexity, there is a strong need for a simulation environment that enables rapid, high-level, simultaneous exploration of scheduling algorithms and power-thermal management techniques. To this end, we present DS3, an open-source system-level domain-specific system-on-chip simulation framework that targets SDR applications. DS3 framework enables (1) run-time scheduling algorithm development, (2) dynamic thermal-power management (DTPM) policy design, and (3) rapid design space exploration. DS3 facilitates plug-and-play simulation of scheduling algorithms; it also incorporates built-in heuristics and a constraint programming-based scheduler to provide an upper bound of performance (i.e., optimal schedule for a set of applications and an SoC configuration) for users. Hence, it can be used to develop and evaluate new schedulers that can be integrated into GNU Radio. DS3 also includes power dissipation and thermal models that enable users to design and evaluate new DTPM policies. Furthermore, it features built-in dynamic voltage and frequency scaling (DVFS) governors deployed on commercial SoCs. In this talk as we discuss the DS3 capabilities, we will present a benchmark application suite with applications from wireless communications and radar processing domains including WiFi TX/RX, low-power single-carrier TX/RX, range detection, and pulse Doppler. We will conclude the talk with design-space exploration studies using these applications.
10.5446/53319 (DOI)
Hi, my name is Marc Lichtman and today I'll be talking about a course that I taught at the University of Maryland as well as an online textbook I created called PySDR that's based on that course. I'll start off by talking about the course and then move on to the textbook itself. So a couple years ago was the first time I taught it. It was called Intro to Wireless Communications and SDR, and what was really unique is that this course was taught within the computer science department to undergrad students as an elective. So these were undergrads in their senior year mostly. And the only problem was these CS undergrads had little to no background in digital signal processing or wireless communications or really any kind of signals and systems. The only relevant topics they may have had were probability and statistics, and then if they were focused in networking they may have taken some higher layer networking classes. But in general they had really no background. So the challenge was to go from nothing to having these students able to actually create SDR apps in one semester. So I had to quickly cover topics that would normally be covered in full courses within an ECE program. So my course was essentially a bunch of grad level ECE classes condensed into one and heavily watered down. So in order to teach a course on SDR to students who have no other background you can't just jump straight into the SDR. SDR, as you may know, builds on top of many foundational topics, the big one probably being digital signal processing. And you really have to get a basic understanding of several of these before jumping straight in. And this is also similar to someone trying to jump into GNU Radio who has no other background. There's a lot in GNU Radio to learn on its own, and they'll probably have an easier time if they first get a better understanding of DSP and SDR so that they're not trying to learn two things at once. So in that regard my online textbook, which I'll talk about later, is a really good starting place for someone who wants to get better at GNU Radio but doesn't have all the foundational background. Now why was this course taught in CS? Well, you may wonder why not just create an SDR course within the ECE department at the grad level, with prerequisites like intro to communications, signals and systems, maybe a basic digital signal processing course, like most universities do. Well, ultimately doing stuff with SDRs often ends up leading to just a lot of software development, and when you start looking at job opportunities out there, there are loads of jobs that are related to wireless, DSP, and SDR but that are extremely software development heavy, like ones that require strong software development skills because they involve working on a team but also require just a little bit of technical background. Your typical electrical engineer is going to learn a ton of the foundational theory in school but probably not as much on the software development side. There are typically maybe one or two courses related to coding at all in an ECE curriculum, so any experience they'll get regarding coding is probably just going to be from personal projects or internships. So this is why CS students are perfect candidates when you start looking at employment and that kind of thing. So the point of my course and the textbook is to provide a lighter way to get the foundational background needed in a fairly short period of time.
I try to cover only the necessary topics to get you to a point where you can be pretty dangerous with SDRs. As far as the course itself, the students used Python as well as GNU Radio throughout the semester, and they each got one Pluto SDR to borrow. I found that the Plutos worked pretty well. They could look at 56 megahertz of spectrum at once even with the USB 2.0 bottleneck. They didn't have to stream 100% of samples; they could just take batches of samples over time at some duty cycle, so it worked out pretty well. They used the Python API as well as the GNU Radio source block. We did not do any transmitting, it was only receiving. My thought was, because this was a basic course, getting the students to a point where they would be comfortable transmitting would be too close to the end of the semester to even bother with it. During the course I tried to make heavy use of animations, diagrams and visual demos, often using GNU Radio for the demo. Whenever possible I tried to get students to code up some example of a concept in order to learn it. I really tried to emphasize learning by doing, because my thought is that you might as well learn a concept as well as possible before jumping into the math and the underlying equations. Then if you're interested in the math you can always go back and look at it, and you'll be in a much better place to really understand the math at that point anyway. I believe that most people are not great at learning a concept just by looking at the math. I'm sure there are some out there. I think most people learn better just through examples and visualizations and actually using the concept. Just to give you a taste of what I mean: during our frequency domain lecture, when we start to learn about Fourier transform properties, instead of deriving the equations for each property, the students actually took FFTs in Python and used example signals to learn those properties. For example, the scaling property you can simulate by simulating something like a pulse train, and then when you simulate a higher data rate, more bits per second using shorter pulses, you can see how in the frequency domain the spectrum used scales in an inversely proportional manner (there's a small sketch of this kind of exercise just below). As part of the course, like I mentioned, I used GNU Radio for creating little demos and explaining topics, but I also used GNU Radio for in-class exercises. After we learned digital modulation we did a little exercise where I transmitted a BPSK signal and then the students used a partially completed flow graph to try to sync and decode that signal. Like, I had a slider for the frequency sync and they took a screenshot and tried to pull out the bits visually. GNU Radio also was great when it came to the student projects because it let them do more in less time. For example, they could use an existing flow graph that someone else created and then put their own little twist on it. So now, changing gears, I'll talk about the textbook I made after teaching this course a couple times, using experience I gained during the course.
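Before moving on to the textbook: here is the kind of scaling-property exercise mentioned above, sketched in plain NumPy and Matplotlib (not the course's actual assignment code).

```python
# Fourier scaling property by example: a shorter pulse (higher symbol rate)
# has a proportionally wider spectrum.
import numpy as np
import matplotlib.pyplot as plt

N = 1024
wide = np.zeros(N);   wide[:100] = 1.0        # 100-sample rectangular pulse
narrow = np.zeros(N); narrow[:50] = 1.0       # half the duration -> twice the bandwidth

freqs = np.fft.fftshift(np.fft.fftfreq(N))    # normalized frequency axis
for sig, label in [(wide, "wide pulse"), (narrow, "narrow pulse")]:
    spec = np.abs(np.fft.fftshift(np.fft.fft(sig)))
    plt.plot(freqs, spec / spec.max(), label=label)

plt.xlabel("Normalized frequency"); plt.ylabel("Normalized |FFT|")
plt.legend(); plt.show()                      # the shorter pulse's main lobe is twice as wide
```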
So if you just looked at the code examples in the textbook you'd see a bunch of custom functions and stuff. But my goal was really to create a textbook that used pure Python, so NumPy, SciPy, Matplotlib, but no fancy library that I created. All the code is just kind of there in the textbook. I was also heavily inspired by DSPIllustrations.com. I had used a lot of the examples from that site during my course, and I think that may have even been what gave me the idea of making an online textbook: basically a full, at-your-own-pace curriculum, but using DSPIllustrations.com's approach of code and animations instead of equations. So definitely check out DSPIllustrations.com as well. So this is the textbook. The landing page is essentially just the index, so you can always jump to a topic you're interested in, but for people who are new to SDR and DSP I would definitely start at the beginning. In the introduction I talk about how the textbook is meant for anyone who's already good with Python and relatively new to DSP, wireless comms, SDR, and is a visual learner who prefers animations over equations. And I think a perfect candidate would be someone with maybe a CS degree who's looking for a job that involves wireless comms or SDR; so someone who's a good coder but really doesn't have much background. And here are some examples from the textbook, although you can certainly look at it yourself afterwards. The Pluto SDR comes into play about halfway through, so everything before this point is all Python-based, no actual SDR. I've got instructions for how to set it up and everything like that. The very last topic is about synchronization; that was the most recent one I added, and it's pretty thorough about how to simulate delays and frequency offsets and then how to actually perform synchronization in Python. I wanted to add a quick note that you do not need a Pluto SDR in order to make use of this textbook. I'd say 90% of the Python examples do not involve actually using the Pluto. In fact, I've tried to isolate the Pluto-specific content into its own chapters. But anyway, you can check out the textbook on your own later. So the textbook, it's still a work in progress. It's maybe 80% done and I'll be adding additional chapters over time, but I'm hoping that the content that's currently there is polished enough to be worth using. If you're curious, I used Sphinx as the framework for actually making the textbook, and I was really impressed with how well it renders on mobile with me doing very little work. It's actually fairly readable on a phone, which I didn't expect going into it. So why Python? Well, I personally love Python. I think that when you're learning DSP and wireless comms and SDR concepts, it's nice to work at a high level so you don't get too caught up in the code itself and you don't have to spend as much time debugging syntax issues and that kind of thing, which tends to happen for me when I'm writing C or C++. Now obviously DSP techniques are most often implemented in C and C++ or on an FPGA, but when you're talking about learning, I don't think it really matters. You can always port something over anyway. I want to point out that PySDR as a textbook will never exist in an actual hard-copy format due to my heavy use of animations, and I only plan to add more animations over time. I'm also not looking to make any money off of it. I don't have any PayPal donate link or anything like that. I'm just hoping that other people can get some use out of it.
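To give a flavor of the "plain NumPy, no companion library" style described above (this snippet is only an illustration and is not taken from PySDR): simulating a BPSK signal and applying a frequency offset and noise, the kind of impairment the synchronization chapter deals with, takes only a few lines.

    import numpy as np

    num_symbols = 1000
    sps = 8                                              # samples per symbol
    bits = np.random.randint(0, 2, num_symbols)
    bpsk = np.repeat(bits * 2.0 - 1.0, sps)              # rectangular BPSK pulses

    fs = 1e6                                             # pretend sample rate in Hz
    f_offset = 13e3                                      # carrier frequency offset in Hz
    t = np.arange(len(bpsk)) / fs
    rx = bpsk * np.exp(2j * np.pi * f_offset * t)        # apply the frequency offset
    rx += 0.05 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))  # add noise
    print(rx[:4])                                        # complex baseband samples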
Like I mentioned, my course and the textbook are very light on math. I think visuals are better than equations, and here are just some examples of animations and visuals that I have in my textbook. So as far as plans for future chapters, I definitely want to add one on equalization and pilots. I already have one on sync, so that's kind of the last step. And then I also need more Pluto SDR examples in Python; there are really just a couple right now. And then I want a chapter that kind of brings everything together, that includes a full communications link as an example, so maybe ADS-B. So yeah, feel free to check out the textbook at pysdr.org, and you can always email me with comments and suggestions at pysdr@vt.edu. I made a GitHub page for the textbook itself, so you can even put in issues and PRs if you want to help out with it. Thanks for your time, and I'll be in the chat to answer any questions.
I discuss the challenges of teaching Digital Signal Processing (DSP) and Software-Defined Radio (SDR) concepts to those without any background in the area. At the University of Maryland I created an elective for undergraduates in the CS dept. that introduced DSP and SDR in a hands-on manner, and have since taught the course twice. During this course, students learn basic wireless communications and DSP concepts, and how to implement the techniques onto SDRs. Additional course learning objectives include digital signals, filtering, frequency domain, digital modulation, noisy channels, cellular, and IoT. The course utilizes open-source SDR toolkit software including GNU Radio and Python libraries, allowing students more interesting and engaging assignments/exercises and more advanced concepts to be explored. Every student had a PlutoSDR to use during the semester. What is unique about this course is that this material is typically taught at the graduate level within ECE, spread across numerous individual courses. CS students, at least at our university, do not get exposed to any DSP or signals background, which is normally required to learn about SDR using traditional methods/textbooks, so they must start from scratch, which is why this course makes heavy use of graphics, animations, and examples. As such, this course does not dive as deep into the mathematics behind the theory as a normal graduate-level ECE course would. There is much more emphasis on "learning by doing", and actually creating SDR applications. In addition to the course I have created a free online textbook called PySDR, based on the material I taught in my course, which anyone can use to learn DSP and SDR using Python. My textbook does not use any custom libraries or code; it's essentially showing how to use straight Python (e.g. mostly numpy, scipy, and matplotlib) to actually do DSP and create SDR applications. Through feedback I've gotten from people using this online textbook, I have learned about what it takes to teach DSP and SDR to folks in a non-university setting. The source code used to generate the textbook (using Sphinx) is hosted on GitHub, so that readers can submit issues or even PRs; to date there have been several contributors. I'm hoping this presentation can show that you don't need to be an EE with a master's degree to dive into DSP and SDR.
10.5446/53321 (DOI)
So, good morning slash afternoon everyone. My name is Gonzalo Carracedo. I'm the main developer of SigDigger, and this is the third time I'm recording this talk, because the first time it took me around one hour and 15 minutes, and the second time I was far from home and far from all my toys, the demo was kind of horrible, and at the end of the talk I was already saying gibberish. So I was finally able to come back home, and now that I feel more comfortable with everything, I hope this is going to be way better than the previous one. The thing is, there are so many things to talk about SigDigger, so many aspects, that it's going to be almost impossible for me to go really in depth into all of these topics. In particular there is a demo part that is going to be extremely brief, because there are multiple examples all over the internet, most of them coming from Aaron Fox, so feel free to check them out. I'm also going to talk about its internals, and yeah, that's pretty much what this talk is going to be all about. So let's get started right away. Well, first of all, some contact information, just in case you want to talk to me: feel free to send me an email or Twitter or whatever, and I will try to answer as soon as possible. I don't promise anything, because right now I'm in the middle of the first-term exams for my master's degree, so things are kind of slowing me down right now. So anyways, I will try to be as quick as possible to answer you. So okay, SigDigger, what's SigDigger? Maybe a few of you have heard about it. SigDigger is a free, free as in freedom, graphical signal analyzer, which, yes, is another one like GQRX, CubicSDR and the many others that you can find out there, but with the idea of making it simpler, at least for its main use case, which is reverse engineering of radio signals you don't know anything about. This software is actually the continuation of an earlier pet project of mine from five years ago, so in order to understand some of its features, a bit of history is going to be necessary, and I'm going to start with that. So how did all this start? Well, it was 2016, I was bored, really bored, and I remember having a very basic knowledge about radio propagation in general and data acquisition. So I had a bunch of spare time back then, a bladeRF, and some tutorials I was checking out on RTL-SDR.com about receiving Inmarsat satellite signals in the L-band, which is around 1.5 gigahertz, circularly polarized. You could receive these signals with a regular SDR device plus a software called JAERO that you could download and use either on Windows or Linux, or maybe on Mac, I don't know. It was just as simple as that, as putting these two things together. So well, of course it wasn't that simple, and I had to use a specific antenna for these frequencies; since I didn't have any, I had to build it myself, which, by the way, if you guys are one of those do-it-yourself freaks, is the perfect spare-time project for you. I highly recommend it, and these are just some of the antennas I came up with to receive these signals. So yeah, I downloaded the software and I was able to receive some messages, and in order to make this work — JAERO only received signals from the sound card — I had to demodulate them first using a software like GQRX, which received the signals, demodulated them with the single-sideband demodulator to the sound card, and then JAERO was able to demodulate the messages being sent from the satellite.
These messages were actually pager messages, very technical in nature, with no encryption at all, easy to receive. They are intended mostly for aircraft: they are pager messages with technical information for aircraft. So okay, it was fun back then and it was a nice summer project; however, somehow I felt like I was expecting something else, something more. It was like, okay, am I actually a hacker because of doing this? When I did this in GQRX I was seeing many other signals, in adjacent frequencies, with different shapes and coming from different satellites, because as I changed the pointing of the antenna I was able to receive different signals. And more importantly, as it was demodulating these signals I was seeing this strange diagram with these four clusters of points in the bottom right of the window. I couldn't understand it; I knew it was somehow related to the ability of the software to demodulate the signals, but unfortunately that was pretty much it. I was kind of challenging myself to do something more interesting. I also asked myself: what if I didn't know anything about the signal? Would I be able to demodulate it? Because in this case I was using some third-party software that really knew about those signals, but if I didn't know anything about them, would I be able to demodulate them? And if I was able to demodulate them, would I be able to decode them, like extract bits and information, the actual data inside the signals? It turns out there is a whole field about this called automatic modulation classification, for which back then my references were a workshop by Balint Seeber at DEF CON — I don't remember which one right now — and Daniel Estevez, whom I contacted personally and who gave me the first hints to start in this fantastic world. I was also reading papers on the subject back then, by Darek Kawamoto, about things like rigorous moment-based automatic modulation classification, blah blah blah; by the way, the paper's title is actually really, really long. However, the talk he gave at GRCon in 2016 was very interesting, because he did exactly what I was looking for with GNU Radio blocks — or well, maybe it wasn't with GNU Radio blocks, I don't remember now because it was too many years ago — but I remember getting very powerful insights about these kinds of techniques, so if you want to start, these guys are a good source. So okay, let's define this better: what were going to be the goals of the project? Well, I had an extremely basic knowledge of DSP back then, so I felt that I needed to acquire skills, and for me the best way to learn something is to reinvent the wheel, okay, like code your own DSP library — if you want to learn DSP, the best way to do it is to code your own DSP library in C and learn the hard way. This was the beginning of a project some of you may know about called sigutils, which you can download from my GitHub page. And I also wrote this small application called suscan, which comes from "sigutils scanner", which was going to — well, back then it had an ncurses interface — perform, with minimal human intervention, automatic channel detection, SNR estimation, implement all these AMC strategies, with an integrated PSK demodulator, and dump everything to a file or something like that. It was going to be based on the device libraries of the different SDR devices; it was going to perform direct interaction with these libraries, with no SDR abstraction library like SoapySDR at all.
However, this was actually a very silly thing to do, because the design was terrible and there were many poor choices. I mean, this was a very hacky project; in particular the ncurses interface was a mistake: it's a library of the pre-SOLID era, it's unmaintainable, with very weird memory management. So I replaced it with a more practical GTK+ 3 interface, only because it provided me with a native C API — the whole project was already in C, so this looked like the most reasonable choice back then. And I redesigned the whole internal API of suscan: I wrote a small GNU Radio-like API based on modules you can plug together, with a message-passing API for communication implementing a client-server model, so the caller of the application could communicate with the library, sending requests and waiting for responses. And it also supported native raw I/Q capture files, like the ones GNU Radio saves when you click on record. And this is what the new suscan interface looked like, which is something more or less reasonable — sorry for the colors, I was doing experiments with some kind of night-vision thing, but well, you get the idea. And here is another example of suscan trying to demodulate analog video from one of those small drone cameras that broadcast what they are seeing on 5.8 gigahertz, frequency modulated. However, this was actually horrible, okay? I mean, it was still a hack even after those refactors, and my CPU was suffering. It was suffering because the block-based flow graph was poorly implemented and it got out of hand — I mean, there was a whole lot of concurrency overhead, which made me replace the whole thing with something called the worker approach that I'm going to describe later. Yeah, and the channelizer: the channelizer was implemented in a very naive way. It was performing filtering using FIR filters in real time, and that meant that if you wanted to select a small channel, a narrow channel, you would need a filter with many, many, many coefficients; that's computationally hard, and then you had to perform decimation on top of that. So I replaced that channelizer, which was very silly in general, with a more advanced, and I think more common, FFT-based channelizer — again thanks to the help of Daniel Estevez. Another thing was the GTK+ 3 interface: it was another mistake. The only reason I used it was because of its native C interface. However, it's very slow, and the reason it is very slow is its graphical rendering: the drawing of GTK widgets in general is based on Cairo, and Cairo is very slow if you want to perform many drawings or many updates per second — it's going to hog your CPU. It was extremely difficult to bypass that API, and that would make the code very unmaintainable. Also, the boilerplate: every time I wanted to add more features to the interface, all the boilerplate code made it more and more difficult to maintain. However, fortunately, most of the suscan core functionality was already detached from the GUI, okay? It was actually two separate libraries inside. There were a few couplings due to the way the analyzer object was implemented, but it wasn't a big deal, I could detach them completely, and apart from that it wouldn't be a problem to switch interfaces.
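To illustrate the idea behind the FFT-based channelizer mentioned above (this is a deliberately simplified, one-shot sketch, not how suscan actually implements it — real-time implementations work block-wise, typically with overlap): instead of filtering and decimating with a long FIR filter, you can select the FFT bins that fall inside the channel and transform them back, which yields the channel at a reduced sample rate.

    import numpy as np

    def fft_extract_channel(iq, fs, f_center, bw):
        """Keep only the spectrum between f_center - bw/2 and f_center + bw/2
        and return it as a baseband signal at a reduced sample rate."""
        X = np.fft.fftshift(np.fft.fft(iq))
        freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / fs))
        keep = np.abs(freqs - f_center) <= bw / 2.0
        Xc = X[keep]                                   # contiguous bins centered on f_center
        xc = np.fft.ifft(np.fft.ifftshift(Xc)) * (len(Xc) / len(iq))  # back to time domain, amplitude-normalized
        return xc, fs * len(Xc) / len(iq)              # channel samples and their new sample rate

    # Example: pull a 25 kHz channel centered at 100 kHz out of a 1 MS/s capture
    fs = 1e6
    n = np.arange(200000)
    capture = np.exp(2j * np.pi * 100e3 * n / fs) + 0.1 * np.random.randn(len(n))
    chan, fs_chan = fft_extract_channel(capture, fs, 100e3, 25e3)
    print(fs_chan, np.abs(chan[:3]))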
So all this motivated the big refactor, and this big refactor implied many things, like discarding all the ad-hoc SDR compatibility code in favor of SoapySDR, which would give me automatic compatibility with most SDR devices on the market. I also removed the GTK+ 3 support and all references to the GUI: now suscan is just a real-time signal analysis library called libsuscan, which exposes a big server class called the suscan analyzer. The suscan analyzer is this object implementing the set of analysis operations you are performing over a signal source. It was mostly client-independent, apart from the fact that you would need to use C or C++ to use it. This API was ready — like finished, well, more or less finished — and usable by December 6th, 2018. It wasn't until some months later that I started to work on the Qt5 frontend, which was July 5th, 2019. I had to use C++. I don't like C++ — I actually hate it, because I think its syntax is incoherent, with many ambiguities, and difficult to read. However, Qt5 is extremely fast; not only fast, its learning curve is so smooth that I was able to have a minimally working GUI for SigDigger in almost one month. Okay, well, not only because of Qt5: I mean, I shamelessly copied the GQRX spectrum widget and placed it, with a different name, in SigDigger, but apart from that, the rest of the code was written from scratch and interacts directly with libsuscan. And you know, this refactor was so successful back then that even RTL-SDR.com dedicated a post to it on their blog, and this was actually very good news for me, because it made many people aware of this project, and people started to download it and test it and file bugs and issues and feature requests. So it was great, and now SigDigger is a more or less serious project thanks exactly to this post — so thank you very much, people from RTL-SDR.com, because thanks to you SigDigger is now something. Okay, so what is SigDigger now? Well, SigDigger is a free graphical signal analyzer. It's an analyzer in bold letters, because it's supposed to let you analyze individual frequency-multiplexed signals; that is, it should let you select a signal in the spectrum, isolate it and analyze it. It works both with small bursts and also continuous signals, with different modulations like PSK, ASK, FSK — I will add support for additional modulations in the future. It will also let you watch generic analog TV: right now it has presets for PAL and NTSC, but you can configure it from the interface for basically any kind of analog TV signal. And of course it includes all the previous AMC features provided by the old interface of suscan. It also lets you listen to analog radio, and it has bookmarks, band plans and even a panoramic spectrum, which I think is pretty cool. And yeah, just to give you some performance figures, I did some tests on a small laptop — I think around this month last year — a computer with an i5 at 2.3 gigahertz, two cores, four threads in total, and I compared CPU usage with GQRX using the same signal sources, and I was measuring at least 20% less CPU usage with equivalent configurations. However, CubicSDR was still less CPU-intensive than SigDigger, so of course there is room for improvement in this sense, right? And regarding processing speeds: using a spectrum-only configuration with 16k FFT bins and 60 updates per second of the spectrum, I was able to process signals as fast as 108 megasamples per second. If I increased the FFT size, the maximum processing speed was somewhat smaller. If I was attempting to perform live FM demodulation of a signal using, I don't know, 333 kilohertz of bandwidth, the processing speed
would go down to 70 megasamples per second, and if you try to do analog TV demodulation, I couldn't do it faster than 5.6 megasamples per second. Again, I think there's a whole lot of room for improvement here, but this is how things are right now. So okay, demo time. I'm going to give you a very quick demo of two big features of SigDigger. The first one is going to be based on a signal I captured from the Inmarsat satellites back in 2016, to show you how fast it is for me to select a signal, guess parameters from it and demodulate it, and the other is a demo of the panoramic spectrum feature, just to see how easy it is from SigDigger to inspect very broad ranges of frequencies. Okay, so I'm going to stop this here and I'm going to move SigDigger to this window. Okay, SigDigger is right now configured to read samples from a capture I did from GQRX back in — no, it was 2017, okay, I was mistaken, but in the end it's going to be the same thing. So this is what the signal I captured back then looked like. These signals over here are actually the ACARS signals coming from the Inmarsat satellites: these are the 1200 bits per second signals, and this small, narrow one is the 600 bits per second signal. There are other signals that I don't care about right now, but let's say I want to inspect this one. So the first thing: when I see a signal like this, with a very compact spectrum, coming from a satellite, my guess would be that it is a PSK signal. This is a guess, and there are examples of signals with a compact spectrum that aren't PSK at all, for instance OFDM or GMSK. However, from personal experience, I see many phase-modulated signals coming from satellites, probably because there are many reflections when you receive signals coming from satellites. But keep that in mind, because it's not like a perfect rule, okay? So this is the signal. I'm going to center the filter here, I'm going to more or less adjust it to let all the energy of the signal inside the filter with little noise inside, and I'm going to open the PSK inspector. And the first thing I'm going to do is adjust the filter more accurately, because I adjusted it from the main spectrum but I can't go further there. So if I open the power spectrum, I'm going to see all the frequencies that are getting inside the demodulator. Right now the centering of this signal can be improved, so I'm going to move it here, let's say something like this, I'm going to give more space to this, and yeah, over there, that's good. Then I'm going to reduce the bandwidth of the filter, because I don't need that much bandwidth, going to center this, I'm going to reduce it even more, great, and over there, that's right. Okay, so I'm already discarding most of the noise coming from the surrounding frequencies; I'm getting rid of most of the off-band noise. So the next thing I can do with this signal is try to estimate the baud rate. If I estimate it, I see that it's already guessing numbers that are very close together; it's around 1200 bits per second, or actually symbols per second. That's great — when you see a number that is close to a round number, like, I don't know, something with many zeros at the end, that probably means that you are very close to the right rate. So I'm going to inject this estimation here — it appears here because I clicked on apply — and of course I can set this value manually.
The next thing I'm going to do is try to guess the number of phases in this phase-modulated signal. One way to do this is by performing something called signal exponentiation. The signal exponentiation thing is related to the fact that if I take the samples of the signal, calculate the nth power of the signal and then calculate its spectrum, then, if the exponent of the exponentiation is exactly the number of phases, all of them are going to be rotated and are going to appear with the same phase. What happens when you do this is that the spectrum of the resultant signal is going to have a DC component, a continuous component, like a peak in the center. If I see that, it means that that's probably the number of phases in the signal. So I perform signal exponentiation with 2 first, and I see that there are two peaks, but none of them is actually in the center. However, if I perform the signal exponentiation to the fourth power, I see this peak in the center, so there are probably four phases in the signal. Okay, so I'm going to stop this, click on QPSK because there are four phases, set the automatic clock recovery — the Gardner clock recovery algorithm — and start to capture things, and you can see that it's already synchronizing to the four phases of the signal. I can improve the signal-to-noise ratio of the demodulated signal by adjusting the matched filter, adjusting its roll-off factor parameter here, intuitively, and that's pretty much it. If I go to the symbol stream, I can start to convert these samples to digital data, and I can play with the width — actually the stride — of the drawn data, of the drawn pixels, to look for repeating patterns or any kind of structure. If I remember well, there was a structure around 2400 symbols, and yeah, you can see there are sometimes things like this — and here you can see that there's some kind of structure. So this is usually good news. So this is one of the things you can do; as you can see, demodulating signals is pretty straightforward. I could even save this to a file, or, I don't know, just stop it and play with the zoom and analyze these things in more detail — I mean, every pixel here represents a measurement of the phase, so I think it's very intuitive and easy to use. Anyways, this is just from the perspective of pure signal analysis, but that's not the only thing you can do with it: you could also analyze bursty signals with, for instance, the time-domain capture, and, I don't know, many other things. The other important feature of suscan is possibly the panoramic spectrum. The panoramic spectrum, as I mentioned earlier, lets you inspect large portions of the spectrum, bounded by the frequency range of your device. It's in here, in this menu, panoramic spectrum, and it looks like this. So I'm going to choose my HackRF, which is plugged in right now, with its default maximum sample rate and all default parameters — I'm not going to touch any of this — and click on start scan. As you click on start scan, it's going to perform frequency hopping and measure the spectrum — well, measure the power — at different points of the spectrum, and I can zoom and navigate through it. Like, let's say that I want to see what's happening in, let's say, the FM broadcast segment: you can see many carriers here, and all of these are FM stations.
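A quick sketch of the signal exponentiation trick described above (illustrative only, with a synthetic QPSK signal rather than the Inmarsat capture): raising the signal to the 4th power wipes out the QPSK data modulation, so a strong spectral line appears, while the 1st and 2nd powers show no such line.

    import numpy as np
    import matplotlib.pyplot as plt

    np.random.seed(0)
    sps = 8                                               # samples per symbol
    syms = np.exp(1j * (np.pi/4 + np.pi/2 * np.random.randint(0, 4, 4096)))  # QPSK symbols
    x = np.repeat(syms, sps)                              # rectangular pulses
    x += 0.1 * (np.random.randn(len(x)) + 1j * np.random.randn(len(x)))      # noise

    for n in (1, 2, 4):
        spec = 20 * np.log10(np.abs(np.fft.fftshift(np.fft.fft(x ** n))) + 1e-9)
        plt.plot(np.linspace(-0.5, 0.5, len(x)), spec, label=f"power {n}")
    plt.xlabel("normalized frequency"); plt.ylabel("dB"); plt.legend(); plt.show()
    # Only the 4th power shows a huge spike (at DC here, since there is no carrier
    # offset); with a residual carrier the spike appears at 4x the offset instead.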
I can go up in the spectrum, let's say to the 800 megahertz region: these are LTE signals in Spain, okay, you can see them here, they're working. And if I go to the 900 megahertz segment of the spectrum, you can see GSM-like 2G signals — this is a very strong one — and this broadband signal here is probably 3G, okay? These are the segments assigned to those frequencies in Spain. You can see that there are many other signals here: for instance, again you can see LTE at 1.8 gigahertz, and again you can see 3G at 2.1, and I think this — well, this looks a lot like maybe LTE, but I'm not sure. But the idea is that you can, you know, navigate the spectrum intuitively just by dragging and zooming, etc. So that's pretty much it for the demo, and we're back to the presentation. Okay, so how does all of this work? How come it is so fast? Well, first we have to understand the architecture. SigDigger is not a monolithic application: it relies on suscan, which is again a library, and suscan at the same time depends on sigutils. SigDigger itself also depends on SuWidgets, which is a library that contains all the Qt5 widgets used by SigDigger, like the waterfall or the waveform widget or the constellation or whatever, and the idea is that you could use this library directly with Qt Creator and create interfaces very intuitively from it. So again, how come it is so fast? Well, there are three keys that give SigDigger good performance compared to GQRX. Actually, the first one is that it's not based on GNU Radio, so all the overhead of GNU Radio is not there. But the most important thing is that it's based on FFT-based channelization through FFTW3, which is a very, very good library to perform fast Fourier transforms, and in real time. It also has this worker approach I came up with, which ended up working pretty well, and there are no blocks at all, okay, so the concurrency problems we had are gone. However, there's still a concurrency bottleneck at the end of the inspector workers: all of them have to wait until they have finished with their batches before continuing with the filtering. So this is one thing that has to be fixed, but apart from that it's mostly lock-free. And there are other important aspects, like Qt5: Qt5 is extremely fast at drawing things, so it makes the interaction with the interface very smooth. And an important fraction of the analyzer API is asynchronous; most of it is message-based. Okay, workers. I've been talking about workers earlier: workers are just threads with callback queues. A worker is a thread into which I can push callbacks with code — well, of course, callbacks are pointers to functions implementing some kind of task — and that task is going to be executed in the context of that worker thread, okay, in a first-in, first-out fashion, like in a pipe. And they are very simple; they are one of the most simple yet most effective objects in SigDigger. In particular, a worker just consists of two message queues plus a method that pushes the callbacks into the queue: it owns a message queue, which is the input message queue that receives the callbacks, and it also has a pointer to an output message queue, which is used to store the results of the callbacks passed to the worker. This worker abstraction is used to implement three kinds of workers: the slow worker, the source worker and the inspector workers, which I will detail later.
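A minimal model of the worker idea described above — a thread with an input queue of callbacks and a pointer to a shared output queue — sketched here in Python rather than the actual C implementation:

    import queue
    import threading

    class Worker:
        """Runs queued callbacks in FIFO order and pushes their results to a
        shared output queue (a toy model of the suscan worker, not its real code)."""

        def __init__(self, out_q):
            self.in_q = queue.Queue()                # callbacks waiting to run
            self.out_q = out_q                       # where results are reported
            threading.Thread(target=self._run, daemon=True).start()

        def push(self, callback, *args):
            self.in_q.put((callback, args))

        def halt(self):
            self.in_q.put(None)                      # sentinel: stop the thread

        def _run(self):
            while True:
                item = self.in_q.get()
                if item is None:
                    break
                callback, args = item
                self.out_q.put(callback(*args))      # FIFO execution, result goes out

    # One result queue shared by several "inspector" workers:
    results = queue.Queue()
    inspectors = [Worker(results) for _ in range(3)]
    for i, w in enumerate(inspectors):
        w.push(lambda n=i: f"batch processed by inspector {n}")
    print(sorted(results.get() for _ in range(3)))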
The user interacts with these workers through the analyzer API; of course, the user doesn't know anything about the workers, but they are there, and they are usually in charge of executing some of the requests performed through the API. In particular, most of the direct API operations — I mean the ones you perform by calling methods directly — are sent to the slow worker. The slow worker is the one in charge of performing operations that may be blocking and cause problems with the interface, in the sense that they could freeze it for very short amounts of time, but the user can still feel them and get the impression that the application is going slow for some reason. And then you have the messaging API, which is mostly asynchronous, message-passing based, and those requests are mostly sent to the inspector workers. More in detail: the slow worker forwards requests to the signal source and waits for the signal source to process them, and at the same time the signal source delivers samples in real time to the source worker. The source worker is in charge of channelization, plus delivering spectrum updates as messages, errors, and sending tasks to the inspector workers, like demodulation, calculation of parameter estimators, spectrums — like the ones we saw earlier when we clicked on the exponentiation or the power spectrum of a given channel — etc. In a typical configuration, if your computer has n cores, you are going to have n minus one inspector workers, so you have one core for the source worker and the rest of the cores dedicated to inspector workers. Of course, if your computer only has one core, all of them are going to be on the same core; it attempts to create as many workers as free CPUs you have. And again, these workers handle their part of the interaction with the API, like receiving requests from the user or sending PSDs, estimator data, samples, whatever. Now, more in detail, the channel inspector: the abstraction behind this big button we clicked when I wanted to inspect a channel. This inspector object represents just a channel being analyzed; it's actually a real-time configurable demodulator, which has multiple specializations, okay: there are many classes derived from it, like one for PSK signals, another one for FSK signals, a raw inspector that doesn't do anything — it just takes the samples and forwards them to the user — and also an audio inspector used to listen to analog radio broadcast signals with SigDigger. And well, all of the channel inspector work is divided into batches of samples that are processed by something called loops. The most important loop is of course the sample loop, which is actually the demodulator loop, and that depends of course on the thing being analyzed. For digital modulations like PSK, FSK or ASK you have something like this: in PSK you have a chain consisting of an AGC, a Costas loop, a matched filter, clock recovery and an optional equalizer. The FSK one is like the PSK one, with the difference of using a quadrature demodulator, which performs something like a derivative directly on the exponent of the signal, so you don't have problems with the amplitude of the signal at all. And the ASK demodulator shares most of its structure with PSK; however, instead of a Costas loop for carrier recovery, it has a PLL that is going to track the carrier of the AM signal, if there is any at all.
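The quadrature demodulator mentioned for the FSK chain boils down to taking the phase difference between consecutive complex samples, which is proportional to instantaneous frequency and insensitive to amplitude — a minimal sketch (not suscan's actual code):

    import numpy as np

    def quadrature_demod(iq):
        # angle of x[n] * conj(x[n-1]) == phase increment per sample
        return np.angle(iq[1:] * np.conj(iq[:-1]))

    # Sanity check with a pure tone at 0.1 of the sample rate:
    n = np.arange(1000)
    tone = np.exp(2j * np.pi * 0.1 * n)
    print(quadrature_demod(tone)[:3])   # ~0.628 rad/sample == 2*pi*0.1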
So okay, now I'm almost finished with all this. This software is far from complete, okay, there are many open fronts. Maybe the most interesting one is the RPC-like remote analyzer feature, which is going to let you use SigDigger when the radios, or the computer actually performing the demodulation, are somewhere else. This is important because — I mean, in my case the motivation for this feature was that here in Madrid receiving HF signals is very hard: there's a lot of interference, a lot of impulsive noise coming from switching power supplies. So my plan was to leave a computer with a radio in a village in the northwest of Spain, in Galicia, for instance at my parents' apartment, and access it remotely using SigDigger. The idea is that, behind the scenes, this RPC-like feature, which is right now partially implemented, works by serializing the messages of the message-passing based interface; they are serialized using CBOR. The idea of using CBOR was from Jeff Cipet, who, by the way, has already added a few — well, not so few — interesting features to SigDigger, and I'm actually very thankful for this, because this CBOR approach works very well: it's very fast, and it's a very compact representation of the messages it needs to exchange. There are also other things that can improve the overall performance of SigDigger, like for instance removing barriers and using buffer pools instead, to make the demodulation of signals — the concurrent processing of signals — somewhat faster, so you don't have to wait for other inspectors in case the computer runs slow at that particular moment. Anyway, you are going to reduce, let's say, the possibility of losing samples because of some sudden peak of CPU load on your computer. And then small things, like embedding SoapySDR modules in the macOS bundle, rethinking the analyzer to improve its current design, to make it more solid in software-design terms, and thinking about other kinds of interfaces, like, I don't know, a web interface or even a mobile interface. Another interesting feature I have in mind is adding a TLE-based Doppler correction. Again, I've been discussing this with Jeff Cipet and we have a few interesting ideas, but we didn't go any farther with this; it is definitely in the backlog, though, and will be implemented sooner or later, because one of the things you can use SigDigger for is to receive low Earth orbit satellites, like meteorological satellites — like the NOAA and the Russian Meteor satellites. Those, since their orbit is very close to Earth, have a very high orbital speed, and the Doppler you measure on the ground is very high and is going to vary very fast. In particular, if you draw the Doppler over the 15 minutes it takes to cross the sky, you're going to see a sigmoid-like graph that is going to take you out of the filter, and it's going to be very difficult for you to demodulate it. And then, I don't know, things like integrating digital analysis — like the one we did with the analog part of the signal, but doing this with the digital bits, of course, to go farther, extracting information from the symbols we extract from the signal — and then nested inspectors as well.
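Coming back to the CBOR-serialized messages mentioned above: CBOR is a standardized binary encoding (RFC 8949) that maps naturally onto JSON-like structures but is far more compact. SigDigger's implementation is in C; the snippet below only illustrates the encoding itself using the Python cbor2 package, with a made-up message layout.

    import cbor2  # pip install cbor2

    # Hypothetical analyzer message: the field names are invented for illustration.
    msg = {"type": "psd", "fs": 250000, "samples": [0.1, 0.25, 0.17, 0.09]}

    blob = cbor2.dumps(msg)                  # compact binary representation
    print(len(blob), "bytes as CBOR vs", len(str(msg)), "bytes as text")
    assert cbor2.loads(blob) == msg          # round-trips losslessly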
Nested inspection is something required by signals like APT — again, from weather satellites — and also for FM, where you have a signal that is modulated twice, okay? For instance, in the case of FM, if you want to extract the RDS information of the station, the RDS signal is a BPSK signal inside the FM-modulated signal, so you have to demodulate the signal first using the FM demodulator, and then you have to select the RDS carrier after the demodulation, okay? I was thinking about using something like SDR#'s slicing feature; however, this is going to require more thinking on my part, so there is nothing sure about this feature. And then, I don't know, many other features that are more obscure but are still interesting. So I'm about to wrap this up. Before finishing, I just wanted to say that I wanted to give you the big picture of this project, not only to potential users of it, because I know that analyzing signals is not for everyone and it's a very niche thing. However, if you are one of those hackers in the old-school meaning of it — that kind of person that loves challenges and good programming, or something different — I think that SigDigger may be your project, because SigDigger right now is a very young project and there are many open fronts, and you need to mix, in many cases, math with good programming, good software design, even physics or telecommunications, and you may learn a lot in the process. So if you feel brave enough to use it, find something weird and fix it, or if you just feel like implementing something that you had in mind for some time, feel free to ask me — or don't even ask me: do some kind of proof of concept and contact me whenever you want. So yeah, that's pretty much it. Well, I forgot to mention: this software is GPLv3, or at least most of it, because I think there are parts that are maybe BSD — I think this is probably described in the COPYING files — but apart from that, yeah, it's just free software, with the small print depending on the specific license of every module. And that's it, I think. Thank you very much — I think the talk went well, at least compared to the previous one — and thanks especially to all of the people that helped me out with the project, especially to all of these guys, and thank you very much for attending. Bye.
SigDigger is a free digital signal analyzer with an intuitive Qt5 interface, originally designed for GNU/Linux but successfully ported to macOS as well. In this talk, I will give a brief introduction to SigDigger, what its use cases are, and why it is a reasonable option with respect to existing alternatives. In order to better illustrate these use cases, I will perform a live demonstration of its features and capabilities and give some real-time performance figures. I will then describe its internals, how the demodulator pipeline is implemented, why it is so fast, and why it could be even faster. I will finish with the mid-term goals of the project, WIP, and a request for collaboration from anyone who could be interested in making this software grow.
10.5446/53322 (DOI)
Hello everyone and welcome to the srsLTE project update at FOSDEM 2021. If this was a normal year, we would all be sitting at FOSDEM suffering from a severe headache, because as usual we completely overestimated our drinking abilities and underestimated the Belgian beer last night. But instead we are all at home, sitting on the couch, relaxing and watching the free software radio devroom. So that doesn't sound too bad after all, does it? So let's get started and check out what we have been working on in 2020 and what we have for you in 2021. For those of you who don't know us yet, srsLTE is a software implementation of the 4G and soon 5G protocol stacks for SDR. With srsLTE you can build a full end-to-end LTE network using open source software. Please check out our project website, srslte.com, to find out more. We are very proud to have a very broad and diverse user base, but certainly among the biggest groups of users are security researchers and academics in general. This is the web page of the GSMA, which is the association of network operators, and on their web page, inside the mobile security hall of fame, they are listing security vulnerabilities that have been disclosed over the recent years. And in 2019 and 2020, as we can see, out of seven CVDs, at least five have been using srsLTE. So that really shows how widespread the usage of srsLTE among security researchers is. And in general, on our website as well, we are collecting research papers that we know have been using srsLTE, and so far we have collected over 200 such papers, and the numbers are increasing. So for the remainder of the talk I am going to talk about the highlights of 2020: we will be looking at the features that we added last year, I will provide a sneak preview of the features that we expect for 2021, and I will also update you and give some more details about our continuous integration and quality assurance efforts. So 2020 was the first year to see two major releases of srsLTE. For those of you who remember, we previously had a three-month release cycle, but that wasn't sustainable and caused quite some headache for us, especially in the summer. So we eventually decided to go to a six-month release cycle, similar to Ubuntu. 20.04 was the first one, and in 20.04 we primarily added carrier aggregation support to srsENB. That means that you can not only have two or three cells in a single eNodeB, but you can also aggregate them, so that a single UE is connected to a primary cell and a secondary cell and can aggregate the throughput. And in this release we also added the complete sidelink PHY layer — even though this is not supported in the upper layers, the PHY layer support is there — which allows UEs to communicate with each other instead of having to go through the eNodeB. And that's very interesting, for instance, for vehicular networks. We've also open-sourced an NB-IoT PHY layer implementation that we previously had in another repository; we cleaned that up and then open-sourced it and provided it to the public. And then, a step towards preparing the next release was the integration of a new S1AP packing and unpacking library, which allows us to regenerate our packing and unpacking code when new releases become available, by just loading and generating the new code from ASN.1 files.
In that release we also needed to provide two hotfix releases, so 20.04.2 is actually the last 20.04 release, in which we just fixed some bugs that we discovered shortly after the release. And then the second release of 2020 was 20.10, in which we primarily added mobility support to srsENB. So mobility, handover, allows users to roam between eNodeBs, or cells of an eNodeB, without getting dropped, because in the case of intra-eNodeB handover, for instance, the handover happens between two cells of the same eNodeB. And this is something that is obviously very important for serious deployments of an eNodeB. We not only support handover between cells of the same eNodeB but also between different eNodeBs, over the S1 interface. Furthermore, we added a new logging framework to srsLTE that is now a lot more flexible. It allows different things: for instance, we have a network sink and a file sink, so we can output the logging also over the network. Also there are more features in terms of log formatting, so there's now a JSON formatter that can write JSON files that are easier to post-process, and stuff like that. In 20.10 we also fixed the 256-QAM support. That now allows UEs to connect, and the eNodeB will enable 256-QAM in the downlink if the UEs support it, which allows larger downlink rates. So just to give you an example, for a 20 megahertz cell the maximum throughput has been increased with 256-QAM from 75 megabits up to 98 megabits, so quite a bit. And also in this release we added some initial NR PHY layer components and updated stack components that we had done earlier. The last stable release is actually 20.10.1, in which we also fixed a last-minute bug that had been introduced shortly after the release. Another main aspect of 2020 for us really has been documentation. We have updated pretty much every bit of our documentation, for all applications, throughout the entire website, which we host on docs.srslte.com, with new application notes: for instance, there is an application note that describes how to attach COTS handsets, so Android phones, to the eNodeB, and how to do troubleshooting — not bug fixing, but troubleshooting — and finding the optimal parameter set for benefiting from optimal performance. Also there is a new application note for the mobility feature that we added this year, which describes how to perform handover between two eNodeBs that are running on one machine, RF-connected over ZeroMQ, with GNU Radio sitting in between them as a broker and as a channel combiner, so to say. That allows us to use GNU Radio Companion and a slider to change the gains between the cells, and then force the UE to measure a weaker cell and a stronger cell, to send measurement reports, and then to force handover from one cell to the other. That's very useful, and Brandon has done a great job, and we've also updated and provided GNU Radio Companion scripts and things like that for you guys to play with, and to use, and to get up and running quicker. So I really encourage you to look at docs.srslte.com and check out what's available there.
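As a quick back-of-the-envelope check of the 256-QAM figure mentioned above (just arithmetic, not taken from the talk's slides): going from 64-QAM at 6 bits per symbol to 256-QAM at 8 bits per symbol scales the peak rate by 8/6, and the ~98 Mbit/s figure is this ratio capped by the actual LTE transport block size tables.

    peak_64qam = 75e6                     # ~75 Mbit/s, 20 MHz single-layer cell with 64-QAM
    scaled = peak_64qam * 8 / 6           # 8 bits/symbol vs 6 bits/symbol
    print(scaled / 1e6)                   # ~100 Mbit/s; ~98 Mbit/s once TBS quantization applies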
Before we come to the 5G section of the talk, I'd like to quickly demonstrate carrier aggregation, one of the main features added in 2020 to srsENB, and for that we're going to use ZeroMQ, so no RF hardware, to connect an eNodeB and a UE running on the same machine over the network. In this console split here, in the bottom left console there's srsEPC, which I'm going to launch now, and in the top left console I'm going to launch srsENB with a four-carrier configuration, with each carrier having a bandwidth of 20 megahertz, so 100 resource blocks in LTE. When we start the eNodeB, we'll see the channels being printed: the ZeroMQ configuration and the ports that are used — you see that it's all running on localhost — and after that the carriers are printed with the configuration of our 100 PRBs for each of the four carriers. And we'll start the console trace, so as soon as the UE connects, we will have the UE showing up here. On the right-hand side I'm going to start a ping; obviously we're not attached yet, so the ping is not going to be successful just now, but once the UE is attached we should see a successful ping. And in the top right console I'm going to start the UE with a configuration similar to the eNodeB's, again with a four-carrier setup. This time the UE is announcing UE downlink category 40, which means that it supports 256-QAM, so we should have downlink rates for each carrier of up to, or close to, 100 megabits for a 100 PRB cell. Then I will enable the graphical user interface. Now I'm starting similar trace configurations, and we attach, and yeah, we also see the ping going through. This time we just see the constellation here, and as you can see, it's only using QPSK as the modulation scheme, because the bitrate is very low and the MCS that has been allocated by the eNodeB is also very low. If we now stop the ping but instead start iperf, you see that the constellation is changing, and it's also something that we can observe here: the average MCS is 19 for that bitrate, and altogether we are achieving a rate of close to 17 megabits. If we now further increase the rate generated with iperf, we can push the throughput to the limits of that configuration. As I said, we have four carriers, each having 100 PRBs, so the maximum theoretical downlink rate is around 98 megabits per second per carrier, and with all four carriers being used we can see here that we are getting close to 400 megabits in that particular scenario. The modulation and coding scheme is maximized and we can see the 256-QAM in the constellation diagram here as well. Okay, that's it for the demo, let's continue with the slides. And I guess before we come to 2021, I think we should also look at COVID-19 — I guess any 2020 recap cannot be without a COVID-19 analysis — but I must say that from a project and company perspective we were really lucky, in that we in principle have a very deep remote-working philosophy, and nothing really changed in that regard. Of course we all miss going to the office and having coffee together, but that's something that we are prepared for and have always been prepared for, working-wise, and we have made many commercial deliveries — successful commercial deliveries — and also two public releases, 20.04 and 20.10. But of course the individual impact has been different depending on the current situation, the family situation.
Just to give you an example: an anonymous, random developer with kids obviously was impacted in a different way than a single junior developer, perhaps, without such dependencies. So that anonymous developer, who has kids at home that needed to be schooled from home, suffered from a severe productivity reduction, and then obviously weekends also started to disappear, basically when the lockdown started. And that's something that GitHub is also very useful for, because you can actually see how this looks: when we look at the good times, January, February, there were still gray areas in that chart — those are weekends — and then when the lockdown started they started to disappear, and as soon as schools opened again they came back. So it's actually quite interesting to see and, you know, study COVID by looking at the GitHub chart of that random developer. But we hope, obviously, that this doesn't stay like this in 2021. But now let's really have a look at what's up next. So the features for 2021 for srsLTE can really be categorized into two main sections. One big aspect will be the extension of our 4G eNodeB, and what we have in mind there is to really turn the 4G eNodeB into a carrier-grade eNodeB — what that really means I will describe in a second — so moving from a tool that is very useful for researchers, for testing purposes, and turning that into something that people can actually deploy. The second big aspect and focus of 2021 will of course be 5G, and in the beginning this will be the UE, so that will be our first drop, which we expect in the upcoming release already, and later this year there will also be a gNodeB, or eNodeB/gNodeB, so NSA support end-to-end on the radio access network from SRS. By carrier-grade, people usually refer to mature, robust and stable systems. For srsENB that means a high-performance, multi-UE tested eNodeB. For that purpose we have added a new proportional fair scheduler that soon will be extended with a frequency-selective scheduling algorithm. We've also added quality of service support and now handle voice over LTE calls. Uplink power control has also been added, which allows UEs that are close to the cell, or close to the base station, to reduce their transmit power in order to save battery. And last but not least, we've also extended the network management and operation functions: a network operations center can now connect to the eNodeB and derive advanced KPIs on a per-cell and per-user basis; also error handling functionality, notifications and alarms have been added. Altogether, that brings our eNodeB much closer to a carrier-grade solution. We're very excited to release the first FOSS 5G NSA UE in srsLTE 21.04, but first let's recap what non-standalone actually means. A UE that has non-standalone capabilities will first establish a normal LTE connection to a normal LTE cell; that cell is the so-called anchor cell. When the cell detects that the UE has NR capabilities, it will ask the UE to measure and report any NR cells in its vicinity, and if measurements have been sent to the base station, the base station may decide to add an NR carrier to an existing LTE carrier. The control plane data for that UE will remain on the 4G side, and only data traffic will go over NR. In the next public release we will add initial 5G NSA support to srsUE. This release will be compatible with, and tested against, the Amarisoft eNodeB.
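Going back to the proportional fair scheduler mentioned a moment ago, the core idea is easy to show in a few lines: each TTI you pick the user with the best ratio of instantaneous rate to long-term average throughput, which balances cell capacity against fairness. This is a generic textbook sketch, not srsENB's actual scheduler code.

    import random

    def proportional_fair(users, n_tti=1000, alpha=0.05):
        """users: dict name -> average channel rate. Returns per-user average throughput."""
        avg_tput = {u: 1e-6 for u in users}            # tiny value avoids division by zero
        served = {u: 0.0 for u in users}
        for _ in range(n_tti):
            # Instantaneous rate: channel quality fluctuates around each user's average
            inst = {u: users[u] * random.uniform(0.2, 1.8) for u in users}
            chosen = max(users, key=lambda u: inst[u] / avg_tput[u])   # PF metric
            for u in users:
                r = inst[u] if u == chosen else 0.0
                served[u] += r
                avg_tput[u] = (1 - alpha) * avg_tput[u] + alpha * r    # EWMA update
        return {u: served[u] / n_tti for u in users}

    # A cell-edge user (low rate) still gets scheduled regularly:
    print(proportional_fair({"near": 50.0, "mid": 20.0, "edge": 5.0}))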
We support a selected set of configuration parameters; for example, the channel bandwidth of both the LTE anchor carrier and the NR carrier needs to be the same. This is due to a current limitation in our radio architecture, but it will be addressed in one of the upcoming releases. For the 5G NR PHY on x86, we will initially be focusing on downlink channel processing; of course we will have to add some uplink bits as well for the initial attach and to acknowledge downlink reception. Our NR coding is currently AVX2-optimized, but the threading architecture isn't yet, so data rates won't be great in the beginning. On the L2/L3 feature side, we will support a full attach to a 5G eNodeB/gNodeB and core network, and we will establish a so-called secondary cell group bearer for NR, over which we carry out data communication. All that will initially be available in 21.04. For the second release in October, we are planning to add initial 5G NSA support to srsENB. We will test the eNodeB against our own UE, srsUE, and try, besides using Amarisoft, to also use Open5GS on the core network side. Similar limitations as for srsUE will apply initially, so it will be a selected set of configuration parameters, essentially a replication of the UE features. On the core side of things, we of course allow attaching to an NSA-enhanced core and will support all the signaling. On the RAN side of things, it will be very similar to srsENB right now: instead of being able to create an eNodeB with multiple LTE cells, we will use a single binary to create different cells with different radio access technologies, so an LTE anchor cell and an NR cell. The interface between the eNodeB and the gNodeB will initially be shortcut with just plain function calls; later on we are planning to extend that and implement the actual interfaces. The eNodeB will allow the UE to send its UE capabilities, and the eNodeB will also instruct the UE to take measurements, and then, if those measurements are successful and a sufficiently strong cell has been found, we'll instruct the UE to establish a secondary cell group bearer with which we can carry out data communication. All this is very exciting, and we're really looking forward to providing you some real 5G in 2021. Let's now have a look at our ongoing quality assurance and testing efforts. Our continuous integration system currently runs around 900 unique unit tests. We do builds for x86 64-bit, ARM 32-bit and 64-bit, and PowerPC 64-bit, for GCC in various versions and Clang also in various versions. According to LGTM, one of the static code analysis tools that we are using, we have been able to increase our code quality from E, around two years ago, to now A+, which is quite a significant improvement. These days, one of the main tools we use for testing is our RF continuous integration, or RFCI for short. For that purpose we have adopted the Osmo GSM Tester, or OGT for short, which is an Osmocom project; together with sysmocom we've extended OGT and added 4G support. OGT is essentially a Python-based application that can be used to orchestrate and control radio equipment and to write configurable test cases that either run very basic tests, such as attaching a UE to an eNodeB, or run more complicated and more complex tests such as mobility or carrier aggregation scenarios. Those tests are executed at different levels of the development cycle within SRS: shorter tests are executed for each pull request, and longer tests run each night, and even on weekends, on the current development branch. OGT also supports external hardware such as attenuators or channel emulators. We use OGT to execute tests using actual RF hardware, but
also with setMQ to run multiple jobs in parallel we are even utilizing Knuradio for channel modeling and mobility simulations very recently our two interns Nils and Bertrand have added Android support to Osmo G's M tester with that we can even use Android Cots handsets in our tests with SSE-NOTB and we're also integrating customer hardware into the setup and control that with OGT and we are currently working on the integration of an proprietary baseband unit and remote radio head that runs our L2L3 software on top of a proprietary file and this will also be controlled from OGT and we execute tests and collect the results and post-process those in a similar way as we do with normal Cots SDR equipment and in the future we're also planning to integrate O-RAN, BBU's and similar hardware and use them the same way so this is a typical general purpose testing setup that we use at SRS so in this particular case it's two U-SURPs, two B210s, cave it up together and with a variable and ten-year sitting in between and the same setup is used by OGT for continuous integration so OGT locks the resources and Jenkins and runs the tests and if OGT doesn't run them they can be also locked by our developers so we can SSH into the machines and run manual tests on those and we have similar setups also with X-310s for carry aggregation or multi-cell experiments that we can use either remotely or the continuous integration occupies them and that's really it I'd like to thank you all for listening and watching and I would also like to thank our public funding sources and DARPA, ISA, the European Space Agency and the European Union through various Horizon 2020 projects I'm happy to take further questions either now or later in the chat and you can also reach out to me via email or on Twitter and I hope to see you all at FOSSTEM 2022 at megahertz one which is not there right now so they have not been uploaded can you see that Nicholas? Yes so I started with the 5G interest probably and then yeah okay now I am now alive and there is a bit of the last questions that we're having in the backstage so thank you Albrecht for your presentation was very interesting and I can see that there is a bunch of interest in 5G so the question is are you implementing FDD or TDD and the 5G NSA Huey? Yeah and I already answered this so right now we are looking at BAND N78 which is the one that is mostly used by operators in Europe and this is a TDD band so depending on the allocation of the operator this is 50 up to 100 megahertz depending on the country depending on the operator and then we will be focusing on that but there was not a question regarding the bandwidth so 100 megahertz is very challenging and then we will definitely not do that in the beginning and also I said that we have a limitation in the beginning in the first drop which is that the second carrier needs to have the same bandwidth than the primary carrier so that means we will limit it to 20 megahertz max likely only 10 but this is only because the way we are interfacing the radio so the signal processing is totally prepared for up to 100 megahertz it's just that we don't for the user for instance don't have independent streamers so we are not able to sample at 10 megahertz for the AT carrier and at the same time with 40 or 50 or 80 on the NR carrier once we have overcome that from the architecture point of view and interface with the SDR properly we we can do any bandwidth on the NR carrier
This talk provides an update on the srsLTE project. We'll look at the two past releases in 2020 and, more importantly, provide an outlook on the two upcoming releases for 2021 which will include 5G NSA support.
10.5446/53326 (DOI)
Hi, I'm Simon Willison and I'm going to be introducing you to an open source project I've been working on for the past three years called Dataset. So Dataset is an open source multi-tool for exploring and publishing data, but I feel like the best way to explain what that means is to show you a demo. This is my dog, this is Cleo, and Cleo is a proud San Francisco dog. The question I want to answer is what is Cleo's favorite coffee shop? So the way I'm going to do that is using Dataset. I'm running Dataset here against a database of my swarm check-ins. I use Foursquare Swarm and I check into different places. And this is relatively unexciting, this is a database table. But every time I check in with Cleo, I use the Wolf emoji as part of my check-in message. This is an alternative view onto that table which joins it against venues to pull in things like latitude and longitude, and then demonstrates a Dataset plugin called Dataset ClusterMap, which shows me those points on a map. Now, since I know that check-ins with Cleo and have the Wolf emoji, I can filter this and say I want everything where the shout contains that Wolf emoji. And this filters down all of my check-ins to just the 359 check-ins that I made with Cleo. This is Cleo's own check-in database. I get a map of all the places she likes, but Dataset supports faceting. So if I'm interested in what categories of places she goes to, I can choose to facet by venue category. Now I can see that unsurprisingly, she likes parks and dog runs, but she has been to 25 coffee shops. I'm going to click on Coffee Shop to filter down to those 25 check-ins. And then I'm going to facet by venue name to see which are the most common venue coffee shop names that she checks into. You can see that she's mainly a blue bottle girl. She's been to blue bottles 16 times, but she does occasionally go to Starbucks. An important thing about Dataset is that everything that you can see in the interface, you can get out as raw data. If I click the JSON link here, this is a JSON feed of all of Cleo's coffee shop check-ins. I can click CSV if I want to pull that data out and import it into something like Excel. So super useful software for categorizing your dog's different coffee shop visits. So let's go for a slightly more serious demo. The New York Times published on a daily basis, they published the latest COVID-19 numbers for different US states and counties. And so I run a Dataset instance publicly available, which takes this data and loads it into Dataset for analysis. So here is that New York Times CSV file that they're publishing of different counties. But in this case, I've got some preconfigured facets. I'm going to drill down to just the numbers they've reported for California. Where's California? Here we go. And within California, I'm going to drill down into San Francisco County where I live. So now I have 351 rows for COVID case numbers in California. I have another plug-in here called Dataset Vega, which lets me plot those on a line chart. So I'm going to a line chart where I plot the date as a date time against those numbers of cases. And we can see a very, very unhealthy looking curve going up through the last few months. But then since this is a database filter interface, I can say actually give me Los Angeles County and I can see their graph, which is even more intimidating. This is all built on top of relational database tables, which means you can do useful things like joins. 
So here's a table I pulled in of the US Census County populations indexed against these things called FIPS codes. You'll notice that FIPS codes are also available in the county's data for the New York Times. So if I join those together, I can get a number of deaths per million and cases per million by looking at the cases number compared to the county populations. So you can see that the worst counties in terms of cases per million are Crowley, Colorado, Dewey, South Dakota. I often find when I dig into this that what's actually going on here is you've got a county with a large prison there and a relatively small other population. And those tend to be where the really bad coronavirus hotspots are when you're looking at cases per head of population. So this is what Dataset is. Dataset is a web application for serving any type of data that you can cram into a relational database in a way that lets people interact with and explore it, but also gives people export options, like a way to export that data out. And incidentally, this is another example of a Dataset plugin. I have a thing called copyable, which is a plugin that gives you data that you can copy and paste in different formats. Like if you needed a latex table of cases per million, you can click up here and copy and paste this and process it that way. So a lot of the work I do with Dataset is around building these new types of plugins. But the secret source that makes all of this work is a database called SQLite. If you're watching this talk, you probably use SQLite every single day, even if you don't realize it, because SQLite is an embedded database that's built into Android, built into iOS, it's used by Windows and Mac OS. Huge numbers of desktop applications use SQLite as their underlying storage engine. And the characteristics that really appeal to me about SQLite are, firstly, it's very small and very fast. More importantly, a SQLite database is a single binary file. It's a something.db file which sits on disk, which makes it very easy to work with these files, create them, upload them places, move them around. And Dataset is built entirely on top of a sort of foundation of SQLite. And the reason I, the way I came to this idea actually started over 10 years ago when I was working at the Guardian newspaper in London. So at the Guardian, we collected a huge amount of information about the world because we published graphs and maps and infographics. And after I joined the newspaper, I found that the reporter who collected so much of this data had been beautifully indexing it and keeping it in Excel files on a desktop computer underneath his desk. And so we got together and started talking about ways that we could share this raw data more widely. We ended up launching a thing called the data block, where the idea was that any time the Guardian published a story based on data, we would publish that data online as well. The way we did that was using Google Sheets because it was free and it was easy to start using. So I always felt there needed to be a diff, there should be a better way of publishing the sort of mostly static data. Three years ago, I started looking at serverless hosting, things like Google Cloud Run and the cell and realized that there's this really interesting thing with these hosting providers where they are extremely inexpensive for small projects provided you don't need a relational database. 
If you need a database, that requires like persisting new data and maintaining backups and so on, that tends to be something that they charge extra for. But for my use case, where I just want to publish data, I don't need a full MySQL or Postgres that's accepting writes. I just need a read-only relational database and SQLite, it turns out, is perfect for that. I can bundle my data up into a.db file and deploy it as part of the Docker container that contains the rest of the application code. So the sort of core idea with data set originally was deploying that application along with the code that it's going to present. I call this the baked data architectural pattern. So let's do another demo. Let's actually build something. The city of San Francisco have an open data portal and my favorite file on this portal is their street tree list. This is a CSV file containing 195,000 trees. I've got a copy of that here. You can see it really is. It's a CSV file and for each tree, it's got the species and the address and the latitude and longitude and all of this really rich data. So another tool that I've been working on is called SQLite Utils and it's a tool for manipulating these SQLite databases. What I'm going to do is I'm going to use SQLite Utils to insert that CSV file of trees into my into a database file. So I'm inserting into a file called trees.db in the trees table and this will churn away for a few seconds, run through all 195,000 of those CSV rows and load them into a SQLite database file ready for me to start exploring with dataset. Both SQLite Utils and dataset are available in homebrew. So if you type brew install dataset SQLite Utils on my computer, it will tell me that I've already got them installed, but on your computer, it should install those dependencies for you and they're written in Python. So anyway, you can run Python, you should be able to run these tools. Okay, so I now have a file called trees.db, it's a 50 megabyte file full of trees. I'm going to run dataset trees.db, start this up on my laptop and take a look. And here it is. This is that CSV file in tabular format. I'm going to try and get these on a map. So if I run dataset install dataset cluster map, this will install my cluster map plugins I showed you earlier. Now when I run dataset again, because this data has latitude and longitude columns, it will show me all of that data on a map. So this right here is a map of tree, you can see that 16 of those trees have incorrect latitude and longitudes on them. But you know what, let's go a step further. Looking at this data, there are a lot of duplicate values like QLUG legal status has a bunch of strings that look very similar to each other. SQLite Utils has a tool for dealing with that. I can say SQLite Utils extract trees.db trees and then the name of that column. So let's do Q legal status. And the library will extract that into a separate database table. So when I refresh this page, you can see that it's still got that data, but it's now hyperlinked through to another record. This is the Q legal status record. In fact, I've got an entire extra database table called Q legal status where, ooh, let's have a look at all of the significant trees. Turns out there are 1732 significant trees in San Francisco, which I can load up on a map, which is kind of cool. There's a significant tree right here on 1449 Lake Street. I love playing with this data because you always find something new every time you duck into it. So we can run SQLite extract on a few other columns. 
I'm going to do it on Q caretaker and Q species, I think. The other thing that would be fun is if we could search this. You'll notice that the Q address column has the names of streets in it. So I'm going to run a command SQLite Utils, Utils enable FT S on trees.db for the trees. I want to enable it on that Q address column. So if I start data set up again, see the result of our changes here are that Q species is now highlighted, Q caretaker is now a link as well, and at the top of the page I now get a search box. So if I want search for, say, Grove Street in San Francisco, here is a map of all of the trees on Grove Street in a nice little line along that street right here. And I can go a step further and say, you know, let's facet by the, I'm going to say, let's facet by the species and see that on Grove Street, the London plain is the most common tree with 189 of them. And there are 39 cherry plums. Again, if I want to JSON file of just the cherry plums on Grove Street, I can do that here. Because SQLite Utils, because SQLite is a SQL database, you can actually execute SQL queries directly against the database using that data set, using the data set interface, which is safe because the database is opened in read only mode, and I have a time limit on how long your SQL query can execute for. This actually means that you can write applications where you execute SQL from your JavaScript, pull back the results in JSON, and render them on a page. There's a ton of interesting things you can do around that. Let's do one last thing. I'm going to publish this data to the internet using a feature of data set called the data set publish command. So data set publish, let's me publish two different hosting providers, Cloud Run and Heroku and so on. I'm going to say data publish Cloud Run. And in this case, I can feed it a database file. So data set publish Cloud Run, trees.db, I'll set a title of San Francisco trees, and I'm going to deploy it to a Cloud Run service called SF trees. So I hit enter on this, and this command will now build a Docker container for data set with the trees.db file in it. It'll upload that file to Google's cloud infrastructure, where it will build it as a Docker image. Once built, it's going to deploy it to the internet and give me back a URL. So this will probably take around about 30 seconds, and in 30 seconds time, that database that I just created from that regular CSV file by running a few commands will be available on the internet for anyone else who wants to to start interacting with it. I'll leave that running in the background and talk a little bit about the rest of the project. So the data set website at data set.io has details, not just a data set, but of all of these other tools that I've been building around it. So I've now got 51 data set plugins that add functionality from the map visualizations, graph, QL, API, different options for authentication, all sorts of bits and pieces as plugins that you can pick and choose what's useful for your project. I also have a collection of tools I've been building for working with SQLite databases. So things like loading in CSVs, loading data from Postgres or MySQL into SQLite, interacting with APIs, the GitHub API or Apple HealthKit databases. These are all things I've been building to try and increase that universe of things which you can load into SQLite so that you can use them with data set. Switching back here, it looks like we're nearly done with setting an IAM policy and routing traffic. 
And so in a few more seconds, we should get a URL which we can visit to see our list of trees online. Here we go. So this right here is that database of trees. It's available online. Anyone can go and visit it. It has the JSON API. People can run SQL queries against it. You'll note it's missing the map. And that's because when I run data set publish, I forgot to specify that I'd like to install the, I want to install the data set cluster map plugin. So if I leave that to run for a few more seconds, I'll have a new deployment of this that has that plugin installed as well. If I've piqued your interest, the place to go first is data set.io, which incidentally is itself just the, this is the data set application running. If you go to slash content, you'll see the databases that are powering this website. And then this right here is a custom template. Data set supports custom templates and custom CSS, which means that you can use it as a framework for building entire websites. This site also demonstrates another tool I've been building for building search engines. So if I search for CSS, this will use a data set plugin to search across multiple tables and show me results about CSS and the documentation, the plugins I've built, blog entries, all sorts of bits and pieces. So take a look at this. The use case page explains some of the different things you can use data set for. And if you want to talk to me about this, I'm going to be at FOSDEM. So please feel free to get in touch with me there. You can also grab a slot on my calendar, and I'm happy to do a one-to-one video conversation with you talking about data set and trying to understand what kind of problems you have that might be solvable using the software that I built. So thank you very much for your time. And yeah, if you have any questions, please feel free to reach out to me.
Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size, analyze and explore it, and publish it as an interactive website and accompanying API. Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is part of a wider ecosystem of tools and plugins dedicated to making working with structured data as productive as possible. I’ll use this talk to introduce Datasette and show how it can be used to quickly explore and publish data. I’ll talk about why SQLite is an ideal publishing format for structured data and demonstrate several open source tools for converting data from different sources into SQLite databases for use with Datasette. Datasette is the foundation of my Dogsheep personal data warehouse project. I’ll show how I’ve pulled together data about my own life from a wide range of sources - from Twitter and GitHub to 23AndMe and Apple Health - and used it to build my own data warehouse to help me answer questions about myself.
10.5446/53331 (DOI)
you Hi, my name is Maya and I'll be talking to you today about how floss meets social science research and lives to tell the tale. So let me start with a quote. If I had asked people what they wanted, they would have said faster horses. This is a quote that is often attributed to Henry Ford and used to pinpoint users inability to convey their needs to innovators. This is the opposite of how open source development and open research attain growth and improvement by valuing and promoting user feedback to reach community oriented goals. As a user of floss tools for open research in social science myself. I'm excited to share some of my experiences with you today. I'll be talking specifically about a team project that I worked on to produce and analyze a Twitter follow graph of last year's foster and chaos con participants. The project used open source tools and agile management. Data was collected with a command line tool torque network of visualization was done with Kathy and from a geeks provided a collaborative framework for managing code data visualization and text. Implementing this project led to insight about what it takes for social scientists like me. Who have minimal technology or exposure to open source values to practice open research with floss tools. I'll specifically address three types of challenges I faced training documenting and collaborating. But first, let me tell you a little bit about my user background. So when I started working on this Twitter follow graph project. I had minimal experience with digital tools for research. I had been exposed to the very basics. Interacting with my computer's terminal understanding how to use version control with kit and executing code in Jupiter notebooks. I'd also practiced using some no code tools, such as Geffy heif and YouTube data tool, and had really liked it. I was eager to do more. Now for a bit about the project itself. I did a research project with the literature review. Most of the articles I came across were using proprietary all in one analytics tools to draw conclusions about data they were gathering from Twitter's API with what seemed like unfettered access. The developers in open science and the need for transparency in the methods we used to collect our data. The team and I agreed it was an ethical and methodological imperative to use open source tools. I knew I needed to harvest tweets, but I didn't know which tool to use for the job. I was using torque, a command line tool developed by documenting the now a community data activism project supporting the ethical collection use and preservation of social media. They're closely tied to the black lives matter movement. Truth be told, I'd initially intended to use guess we were a tool developed by science pose media lab. I was a developer in the first steps of installation. And at the time, I was anxious about getting the project started, and not even aware that there could be documentation or a friendly teammate that could help me solve the problem. Because I'd been successful at installing torque on my own personal computer a few days earlier. I simply redirected to the easiest available solution. This opening anecdote leads me to address the technical and value based challenges of being a floss tools user in a social science setting. So I'll start with some technical challenges. First off, I don't know how to code. This means that I didn't exploit torque to the fullest, or use its Python library programmatically. 
I rely on a third party to build a network I eventually imported into Gefi. Once I was in Gefi. The second challenge I had was with the community detection calculation, which caught me a bit by surprise. But when I moved to social network analysis, it didn't occur to me that this calculation would yield a different result each time it was launched. Unless I expressly told it to always start with the same node in the network. Once I realized this, I wrote everything down in our project wiki and learned an important lesson about documentation that I'll go into more detail about in a few minutes. One of the most important advantages is its real time visualization feature that lets you zoom in to get high granularity. And for me, the previous screen was blank. This was my third challenge. The bug fix I found on the forum wasn't effective. But thankfully, I could rely on the separate preview tab to visualize the network as a whole and zoom in, although not as smoothly. And then in the long run, this bug ended up being pretty frustrating, and it led the team to explore other alternatives. One of these is a different open source network visualization tool expressly developed for Twitter data. It's called Twitter Explorer. The usage is being very user friendly, but in its early stages of development, it did lack many of the functions that get provided. For instance, Twitter Explorer only has one layout option, whereas get he has several. Because we wanted to perform a somewhat in depth analysis of our network, I ended up sticking with get he for this project. But for other projects, Twitter Explorer was quite handy. The last technical challenge I'll bring up here has to do with complying with Twitter API's terms and services. Despite being stated in plain text on Torx website, it really wasn't clear to me what part of the data I collected could be shared with the world. Thanks to one of the follow up interviews with a participant that the issue came up and that we dehydrated quote unquote, our Twitter data to make it shareable, meaning we only provided a list of tweet IDs in the project's public for my git repository. Now, I'd like to address some value based challenges. So, number one, I had to learn to practice transparency, which is an integral part of open research on many levels. First, there's the transparency that pertains to the training and tools we use. As I mentioned earlier, learning how to use open source tools and integrating them into research methodologies seemed essential. Then there's the transparency that pertains to documentation. As I mentioned earlier, this project taught me the importance of documentation written by others as a crucial resource for troubleshooting moments, but also documenting my own processes as a way of ensuring reproducibility. Case in point improper documentation led me to experience difficulties reproducing community detection results during the visualization process in Gefi. Finally, there's the transparency that pertains to collaborating. I had to learn to shake the feelings of insecurity. I had about practicing research in a field that was new for me, and sharing a work in progress with actors from the community. Despite high level training and research, I'd never socialized any of my reflections processes, or even results outside of publishable material, such as literature reviews, or academic articles. 
The reflexivity that goes into practicing open research was intimidating at first, but I've come to appreciate it as a basis for better communication with team members, as well as with people outside the project. It's also an earnest way of engaging people in collaboration. This leads me to the next value based challenge. Two of them in fact collaborative participation and community development. Framagit is a great organ organizational organizational tool for handling collaboration. Despite being new to me, the collaborative framework and approach to labor it offers made it easy to chart out objectives assigned tasks and agree on bug solving mechanisms as a team. It also made internal planning easier versus implementing ad hoc solutions to solve problems as they occurred along the way, which also happened, but less. So you might be wondering what the challenge was then. Well, in my previous experience, much of the labor in research was hierarchically organized and collaboration was codified according to various factors, such as seniority to give just one example. So using tools that are designed for collaborative participation feels bewildering at first and challenges the proprietary feelings I was taught to harbor about my work. On the other hand, it made it easier to move away from top down approaches to research and present results as a common achievement. There's also an external dimension to collaborative participation that is crucial to making research processes and results, a public common good. And that is the possibility to evolve. Thanks to feedback from participants. Here we go. So this makes open research gain relevance and impact through community building and outreach, but it's not automatic. Time and resources need to be spent on building communities around projects like ones that we've heard about in the step room today. And this can only take place when the research goals and a developing communities goals are aligned. Our Twitter follow graph project that brought together five people from three different institutions was inspired by the desire to explore the ecosystem around floss tools and open science. And the idea is that floss tools, but also open source values will become increasingly integrated into research processes as the move towards open science gains more traction throughout Europe. We've done a few interviews with key actors from this network, which gave us some insight about how this world works on the inside. And we're to be done to take this exploration a step further into the open research tools and technologies community. Oops, got ahead of myself here. So this is why our team chose pub pub and the collaborative publication platform to draft an essay about our exploration of the open science open source ecosystem, or as the team calls it, the ozone ecosystem. The pub pub provides a place where team members can continue to interact over time, but also a place where others can find out more about the project and engage with it. There's also a blog post describing the project's preliminary findings and the public repository on from a git that I already mentioned. So, what does it take to. Open research, dear practice, there we go. Open research in social science today. In my experience, it takes using floss tools, but also getting acquainted with open source principles, such as transparency, exchange, collaborative participation, and community oriented development. 
And this building confidence through exposure and training, documenting your every move like your life depends on it, and fighting against cultural attitudes of insecurity about transparency around transparency about intuitive processes or unfinished work. I'd like to close with a reflection that comes from my background in Latin American cultural theory. And this concept that I find particularly useful for thinking about the way open science and floss development interact. It's called transculturation. The term was coined in the 1940s by a Cuban anthropologist who argued that when different cultures co habit, they end up influencing and changing each other mutually and end up creating a new culture all together. And this is what floss designers and engineers are doing together with social science researchers to create open research culture in and out of academia. Thank you. Okay, I think we're live now. So welcome everybody to the Q&A. Thank you, Maya, for being here with us today. So guys, if you can post your questions into the chat room, we will answer it as soon as they come out. I'll start with a few questions I have myself. First one will be, as you mentioned, you have some technical difficulties while doing your inquiry. Did you try to contact the developers of the tools you were having trouble with? That's a great question. No, that wasn't an option in my mind at the time. Being new to the culture, I didn't even cross my mind. Yeah, that's funny, but because actually there are a few tools that you can use to contact the devs and try to get help. Do you have more ideas now on how to do that? Yes, for sure. The first thing I wasn't aware of was the kind of evolution that floss tools go through. I was used to being presented with software as a package that was a finished product. So I didn't realize that there were things like GitHub where you could interact with people who are actually in the midst of developing new features. And if I'd known that, I think I would have felt at least going to read about it. And I think I would have needed maybe an extra push to interact with the people who are on the developing end. But now I know it's for real. So, I'm going to start with a question from Bajama who just presented. So are the academic publications published on your work? No, they're not published. As of not academically published. There's a blog post. There's a work in progress on PubPub which has not been published yet or open to the public. But I think we were waiting for this day to see what kind of reception and interest we could foment for the project. Okay. Yo, ask another question. Would you agree if I said that docs and non-techie users are often lacking or in tutorials guides assume too much knowledge? Like about the documentation you went through, do you think there can be a place for more effort to help non-techie users? In the actual wording of the documentation? If it concerns the actual wording of the documentation, I think it was when the previous talk was talking about recipes. And I've noticed that in GitHub there's a lot of recipes actually of how to use the tool. And I think a combined recipe plus documentation really does the trick. I hope that answers the question. Okay. Next question. You mentioned five collaborators are more or less in three institutions. What tools do you use for day-to-day collaboration, talking and sharing ideas regularly? We were using Framiguit, which is just an alternative to GitHub. 
That was the kind of the central tool that we were all collaborating on. And internally we were using Rocket Chat, which is just an alternative to Slack. Okay. So I will throw my question then. So yeah, you mentioned that you documented your process yourself. And what tool did you use to do that? I used the Wiki inside Framiguit. There's a Wiki function. And I just used it as kind of a diary, a research diary. And so every time I would work on a project, I would just try to compile everything that I had done there. Okay. I have two more questions. The first one is, do you now apply on what you're doing, the lessons you've learned in this process? Yes. What do I, what is it specifically that I do differently? I think I reach out more readily to people who are involved in the tools, in making the tools and using the tools other users. And I also look for new tools all the time and new ways of doing things now that I know where to look. Okay. Do you have any recommendations as a user for developers or to make things better for the past you? What would I do here for you if some days I would have done, I don't know, something. Do you have any ideas? Specifically for the developers, my advice would go probably more towards the research community. In terms of developers, I found a very open community that was kind of hidden to me before. I wouldn't have dared go towards a developer because of fear of lack of knowledge of how to talk about the same things together. But I think that actually applies more to the research communities that use tools and don't know how to interact about them. Okay. We have a new question. Did you get any pushback from your team or university when you published unfinished data? Did you ever seem against the university culture? Yes, of course. The advantage I had was that I was doing this as a student. So I didn't have a reputation to put on the line. I know that for established researchers, this is a huge question and there's a lot of pushback and fear about the unfinished nature of work. And so, yes, I think that would be an obstacle. Okay. New question. How do you explain that so few social scientists are sensitive to these values of openness and transparency? I'm just going on with the question. As you expose analytical tools used in social sciences, university that are mainly proprietary and which is astonishing compared to political involvement of academics in non-technical users. Yes, absolutely. Oh gosh, there's such a complex and deep question. I think. What do you have on this? What made it? My intuition would be to say that user interface has a lot to do with it. Because unfortunately, a lot of proprietary solutions are also very, very simple to use and they're also made more readily available. Like they're just pre-installed on machines that are accessible in all institutions, be they public or private institutions for research. So the question often doesn't even get posed. It's just what's already on the machine. Then if researchers were asked what tools they wanted to use, maybe that would have a bearing on which ones were used in the end. Also, if there were more technical training, I suppose, just in basics about how to use Floss tools, I think that would make more people more comfortable. And that's the short answer. Did you ever write an issue on a GitHub repository on a tool you've used? Yes, for a different project, not for this one specifically. How was your experience doing that? Great, I got an answer. And the tool changed. It evolved. 
It was great. It was very satisfying and encouraging. So we are about to close the Q&A session. Just take a few seconds. Thank you for being here and for your great talk. We are going to move to the next talk in a minute. Thank you.
This talk aims to give a user’s perspective on FLOSS tools for open research in social science. It will be based on personal experience with a team project that aimed to analyze the Twitter follow graph of last year’s FOSDEM and CHAOSScon participants. The project used open source tools and agile management: data was collected with a command line tool (Twarc), network visualization was done with Gephi, and Framagit provided a collaborative framework for managing code, data, visualization and text. Implementing this project led to insight about what it takes for social scientists who have minimal tech knowledge and culture, to practice open research with FLOSS tools. This talk will specifically address three types of challenges I faced : training, documenting, and collaborating. This talk will be particularly relevant for people who are designing FLOSS tools in interdisciplinary research environments. More generally, it can be of interest to people who attended the Open Research Tools and Technologies devroom talks in 2020.
10.5446/53332 (DOI)
París, é unha around 150 members onde unha moa menos é permanente as praséaturas. O trabalho que é feito no laboratorio é ao final de processación imáxida, networks, real-time algorithms chamada Miratorix, Informatics, Linguistix, Signal Processing, etc. O laboratorio é unha importante, a produção de software para que este trabalho seja reserrado. Eu estive no laboratorio 20 anos atrás. Em 2006, a dirección me preguntou para estudar a implementación de serviços associados ao desenvolvimento do laboratorio para promover a visibilidade deste trabalho. Parte do trabalho foi feito dentro de um projeto nacional coletplume. Como veis, todas as referencias estão completas no final deste trabalho. Neste projeto nacional fizemos a publicación de descripciones de software e tantos estudios sobre esta produção. Mais reciente tenho sido involvente na propósito para o plano de management de software. E, no momento, o trabalho que se transforma no trabalho sobre a evaluación de software e infraestratas e tantos estudios. Em 2006, o primeiro issue que gostamos foi para estabilizar a lista de produtos de software no laboratorio. Eu fui a ver o colegio e perguntar o que fico fazendo. E, algo, eu me sorpré, porque não foi claro para eles se o que fico fazendo poderia ser considerado a produção do laboratorio. Não era o caso do everyone, mas um loto de software não foi identificado e não existía a lista de autoros e o date quando o software produziu, quantas versões, onde foi. Algunos software foi bem disseminado com web pages ou com forros, mas não foi o caso do everyone e foi importante estabilizar a lista de produtos. O laboratorio tinha interés real na produção de software free, mas, xa xa, não existía license todo dado o software. E, algo, foi importante quem decidia o license e entender as legais asas e os autoros. O laboratorio precisa clarificar as legais para tomar decisión sobre o que foi o software do laboratorio e como se o license e disseminado etc. Outro importante é o valor da produção do research. E, para isso, foi importante propós evoluções nos procedentes de research. Aso que o que você pode ver os problemas que nos encontramos son de uma nature diferente. Algunas são legais, algumas são políticas, algumas são asas científicas. Então, isto requere diferentes aproxes para conxer os problemas, mas o que podemos dizer é que os problemas son todos os mesos no ándio científico e em muitos laboratorios son the same. O primeiro problema que eu vou contar é sobre o definición. Nós começamos a propósito o definición de um laboratorio logístico com três diferentes características. O software é o que o researcha. Um membro do laboratorio participa no código de cústia. O software é asociado a publicaciones do laboratorio. Mor recentemente nós propósimos este concept de software como um set de code que foi escrito por uma equipe de research. O software foi construido e usado para producir um sáfaro que é disseminado na contribución científica. O software é um set de files que contén o código de cústia e o documentación específica exemplos, licencias etc. Outro de as questões que eu vou contar hoje é como para dedicar a lista da produção do laboratorio. Então, este foi feito no projeto nacional colado no Primm onde fizemos esta publicación de descripciones com um set metadata metadata cáceres, clasificaciones etc. Asasasasasas contén a descripciones e links para as publicaciones. Con este objeto foi muito fácil de ter a lista da produção do laboratorio e ter toda a visión de o que sucedia. 
Quando o projeto foi cerrado nós continuamos a coxer para ter esta lista de objetos que o identifiquen. Por último nós estamos coxando no 11 de univercite Gustave Fael para setar as políticas da universidade para ver como se relaciona com a produção do laboratorio da universidade. O que é also posible agora é de papers do laboratorio. Por exemplo, é was no posible 15 anos ago. Por exemplo este journal o artículo e o laboratorio ao mesmo tempo o jornal tem interface onde o experimento do laboratorio para entender o artículo etc. O outros publicaciones são como este journal onde o publicación o jornal mas o software é away por exemplo, no GitHub ou en outras plataformas. Outro problema, como eu disse é a procedión de disseminación. É muito importante o que sobra antes de dar a distribuir o software. O license sobra quando choque um nome se o separe o traque dos autoros das funcionalidades novos etc. Esta procedión pode ser adaptada a tantas situaciones diferentes incluindo o data de research. Um view mais forte deste tipo de trabalho é o plan de software. Este é o documento onde o informe sobre o software O sete os gols e o separe para ver o que o xeo o que o xeo etc. É um tool importante para manejar o software Outros templos que estão propósitos por exemplo o Instituto de Sustainability no Kendo Unidade. É importante entender o legalismo e nós estamos entendendo o laboratorio de computer science e não temos any idea sobre estas questões Então nós fizemos o trabalho para entender o legalismo e as políticas científicas. Fizemos esta comparación com a producción principal do laboratorio que são as publicaciones, artículos, books etc. Este trabalho é publicado na França. Mas se a França é un problema para você por favor venha para mi e eu seré muito feliz e que o trabalho é o máximo de todos os referentes que estão a final de este trabalho que estão con o legalismo de políticas científicas de uma forma ou de outra. O último problema que eu vou falar para vocês hoje é o valor do trabalho de a produção de software de receas. Para este o que nós propusamos no colaboración com o Tomas Retio é para propusar o protocolo de extensión para evaluar software de receas com 4 estes o primeiro para con o identificación e as xexas e o segundo o terceiro é para os aspectos de software e o quarta e os aspectos de research é muito importante para nós para separar quando você é evaluado do software e quando você é evaluado do research. Nós pensamos que há dois aspectos diferentes que serão no mix e alguns cometes que considerarão mais importante e outros que nos colocarão mais énfasis nos aspectos de research então é muito claro quando considerar o quarta o outro. 
Este protocolo é muito flexível cada cometes pode ser adaptado a situaciones diferentes e o contexto este é o último slide nós temos soluções proposas para os problemas que nós nos encontramos nós temos soluções proposas que são generales mas adaptadas a xexas de xexas diferentes e o lab tem o que se adapte sobre o software de xexas as pessoas aquí agora é melhor o que deve ser feito e como deve ser feito isto não significa que o stop nós temos o stop do trabalho sempre há novos membros novos estudiores então nós devemos continuar en dizer como se relaciona nesta producción há o trabalho ongo no nível de a universidade Gustave Thel para estabilizar policies open science nós temos contribuido ao trabalho internacional de xexas software náxis mónios grupos de trabalho por exemplo, con o sítio por exemplo o sítio de xexas implementación de xexas 11 no RDA há grupos estudiando xexas software e nós pensamos que con este trabalho nós temos contribuido a mejorar a situación de xexas software many thanks for your attention bye
In this talk we present the experience of the software produced, as part of the research activities, at the French Gaspard-Monge Computer Science laboratory (Laboratoire d'informatique Gaspard-Monge or LIGM in French). The LIGM has an important production of research software with 66 software items identified for the period 2013-18, where 50 have been disseminated as FOSS. Since 2006, the Lab has worked in order to improve the dissemination and the visibility conditions of this research production [1], which includes the adoption of policies and the production and maintenance of a catalogue. In this context, one of the first issues we had to deal with was the concept of the Lab's software, in order to answer the questions of the researchers. The second concept was the identification or reference (or citation form) of such objects. The concept of "Logiciel d'un laboratoire" [1,2] initially proposed in 2006 has since then evolved to become research software ("Logiciel de la recherche") and it is a central concept in order to propose services and infrastructures to deal with this production in the current Open Science global context evolution [3,4], where the goals are to make it visible, accessible, reusable. It is also a central concept in order to propose research evaluation procedures related to this scientific output [5].
10.5446/53336 (DOI)
Bonjour and welcome to this Lightning Talk. In the next 10 minutes, I will present three of PanRei's main features. Together, we will first learn how to load and explore Twitter data sets, second how to connect to a Hive instance in order to further explore its WebCorpI, and third how to retrieve, curate, and visualize free hydrated, bio-archived results. In doing so, we will each time follow PanRei's central dogma, that is the flux type sequence. This central dogma is that first you need to retrieve, curate, and provide a version or perspective on a data set that you want to explore, and then once this is frozen, you can submit it to the types, to the relevant kind of visualization that you can use to better explore or retrieve insights from this specific data structure. Let's go! This is what you've seen when you first started PanRei. You have those two buttons that are disabled and you don't really know what they do. You can move the car around, but that doesn't really help either. And you have this call to action that you can click on that will start a tutorial and is apparently the only thing that you can do. So for now we're going to bypass the call to action by unlocking the menu and we're going to toggle the flux. So this is the flux. As you can see, you have a series of tabs and each one of those is to be used to retrieve and curate data, retrieved from variant sources. So here we're going to choose the Twitter tab and use it to retrieve a specific kind of Twitter dataset, one that has been retrieved through the Media Labs, GazooIoR Python service. This requires two different files, the CSV file containing all the tweets and the requests that we use in order to gather the tweets. Right here I'm using a Nobel Prize 2020 request that has been done when the CRISPR Nobel Prize was given to Jennifer Daubner and Emmanuel Charpentier. There we are. So now the dataset is in the system. However, we can't use it just as of yet. We first need to send it from the system to the relevant type. And that's what we're going to do right now. We're going to select this dataset that we just sent to the system and we are going to export it to the GazooIoTip which is the most relevant type for this kind of data structure for us to visualize this dataset. And in doing so, we have respected Paneray's central dogma. First we have loaded the data through the flux and now we're about to visualize and explore this data using the type. GazooIoR is an amazing tool because it enables you to retrieve humongous amounts of tweets without having to pay anything to anyone. However, you might not be sure of the best way to classify this data that you have just obtained. Paneray provides a way to start exploring this dataset and see what's inside. This particular dataset contains a bit less than 300,000 tweets. It's been built from this request of CRISPR, Daoudna and Charpentier and it's a bit aimed at the time the Nobel Prize 2020 for chemistry was given. As you can see, we have a huge spike that starts from the announcement and then several waves of interest. I want to start right here. So as you can see, every dot here is a tweet but they are not all the same shade of blue. The darker the shade of blue, the more retweets the original tweet got. So when you see a dark blue dot, that means probably that it's actually a retweet of a previous tweet. 
All the tweets that are highlighted in orange right here are the same tweet as the one selected and this particular tweet links back to the red line that you see towards the first occurrence of this tweet. Okay, so let's choose another one that would have carried the session. So this one echoes or links back to the first tweet by the Nobel Prize account. You can see that the embedded media is featured in the tooltip and so clearly, well apparently, according to the prevalence of the orange in this sequence, this was one of the tweets that carried the session. Now if we go towards what appears to be the second or the third wave, we can see that this orange is still here but it's not leading the charge. So if we try to inquire in other dark blue dots, that tweets that have been retweeted many times, well we see that for example this one appears to have really carried this third wave and it links back to a tweet by the France TV Berlin account that is about apparently an interview that she gave to them. So this way, well using this exploration tool, you can have an idea of what are the different waves of tweets that you had in the period you have inquired with Gazouloire and what kind of phenomenon maybe have carried those waves into the public interest. Let's now move on to the second part in which we will connect to a distant hive instance and retrieve the corpite it contains. For the retrieval part, I'll choose the Media Labs demo that is made available to anyone and you can see those are the corpite available at the time I'm loading it and I can load randomly three of them that have a few web entities and so Panorazer is going to connect, retrieve the web entities and pour them into its own database. For the vis part however, I will show one of my own corpite that I did for a different project. So as you can see this is an island of different web entities, you can see the metadata in the tooltip and this corpus has been tagged which enables me to show only some of those web entities and to recompute their density according to the data that is preloaded in the corpus. You can also see that when you click on a web entity you can display the metadata that is given to us by the Hive API. Now of course it's only one way among many others to explore Hive corpus. Its main advantage is that it's fast and it can give you a quick insight on your tags and how they are situated among one another. We can now move on to the third part and what is probably the most interesting part of this presentation. Pandora is not only interface with other services, it can also retrieve data on its own. Using the flux feature we will now retrieve a corpus of results from BioArchive on COVID from the 1st of January 2020 to the 1st of May 2020. As you can see this request yields 431 results. In order to get fully hydrated results, Pandora first scrapes results pages for DOIs and then submits those DOIs to the BioArchive API. But how do we curate those results? Well, it's simple, we go back to flux and we send this data to a service that does just that, Zotero. Using Pandora you can send any dataset that is compatible with the Cyton-style library JSON format to Zotero for hand curation. After giving it a few seconds you can open your Zotero software and have a look at how the collection is being created and how the documents are being poured in. So now this is a regular Zotero collection, as you can see more and more documents are getting poured in. 
And so you can do pretty much whatever you want, you can edit documents, you can retrieve data, you can add documents, you can remove some of them. By the way you can check that this clearly has been fully hydrated. And so, well this is a regular collection, so you can do pretty much whatever you want. But now that we have curated our dataset we need to import it back into Pandora and start visualizing it. So according to our very well-known central dogma, we need to import in the flux this Zotero collection and then we'll be able to export it thanks to the system into the relevant types. So let's do just that. Now it's in the system, we go back to flux, we go to the system tab and display the available datasets, we see the CRISPR noble that we had before and now this new COVID bio that we just imported. I'm going to select those two and export them to the system by removing the date. And there we are. I don't have much time left so I will end this presentation by showing one of the ways Pandora provides you with to explore these kinds of datasets. As you can see it's a pretty classic way to display data, you just document through time and documents with similar authors are linked with the red edge. Thank you for your attention and don't hesitate to shoot me a question on Twitter on GitHub.
PANDORÆ : Retrieving, curating and exploring enhanced corpi through time and space Mapping the state of research in a particular field has been made easier through commercial services providing API-based bibliometric-enhanced corpuses retrieval. Common assertions such as “the use of CRISPR technologies has skyrocketed in laboratories all around the world since 2012” can now be easily verified in both quantitative and qualitative perspectives using those platforms. Such services as Elsevier’s Scopus propose inbuilt functions to explore corpuses chronologically and geographically. They don’t, however, allow for hand curation and enrichment of the corpus. This lecture advocates for a solution to this methodological issue using PANDORÆ, a free and open source software designed for that purpose. PANDORÆ requests corpuses from the Scopus API, enriches its data by geolocating each document’s affiliations, and then uploads the resulting dataset to a Zotero library. The user is then free to curate the corpus, adding, editing or removing items. PANDORÆ allows downloading it back from Zotero to its internal databases, and to display the enriched corpuses on a map, on a timeline, or as an author-directed force-layout network graph. This presentation will also introduce more advanced PANDORAE features, such as displaying Twitter dataset obtained through Gazouilloire, mapping web entities loaded from Hyphe and scraping biorXiv results using Artoo.
10.5446/53337 (DOI)
Let's start the Q&A. Thank you Travis for your presentation. We have a lot of questions. The first one was about the relation with the MIT Media Lab. How did the cooperation with the MIT Media Lab help or shape the research differently? Can you elaborate on that, please? I think there are probably a lot of cultural factors at play here, so I don't know if it's a direct answer to how undirected corporate funding affects research in general, but what I found was that it made for a much broader span of creative output. On the one hand, you might get some really bad ideas because there wasn't the normal checks and sort of proposal process that would be vetted, but on the other hand, you would get some really crazy-at-first-sounding ideas that turned out to be powerfully different in a very good way, which you wouldn't have been able to get if you needed to go through the traditional models of feedback and proposal review. I think it made it a riskier research environment overall, and I think that led to some really good outcomes, and it leads to some people feeling like, if they had the structure of a more formal process, they would have done better. So I think it's got pros and cons, but in the end, I think you wind up averaging about the same. Okay, yes, you need to find a balance between the traditional approach and the creativity of MIT. Okay, another question, yes, about your legal policies. I know that you are using a Creative Commons license and it's related also to your business model, so you can use different licenses depending on whether you have some premium model, as far as I can remember. It's free if you are using a CC BY license, I think. Yeah, so the origin of that is really trying to figure out a business model that would allow for the large professional organizations who are doing publishing to be the ones who are paying, and then subsidizing for the researchers and academics and universities to have subsidized or free access. And so one of the things that we found was that a lot of academics and universities and librarians want their work to be open and freely accessible, and so they want to be using these Creative Commons licenses. Whereas if you have a commercial for-profit publisher, they are more picky about having their own custom license, so they want that extra feature, and so that was one element where we said: okay, if we charge for this we'll actually get all these commercial providers to subsidize the infrastructure for the nonprofit academics or researchers. And I don't think that has been perfect. It's not like that was some clever insight that meant we have all the funding we ever need. But I think that's the general game we're trying to play, of figuring out where the differences are between how commercial operators function and how academic or nonprofit publishers function, and how we can make it so that we can subsidize the tool for those open cases. Perfect. Another point, it was a comment, Paul's comment. So your point is that helping users to use FLOSS will mean helping to start companies selling services around the open source code. What is your point about that? I don't know if that's my point. I think the point I have in mind is more that I don't know what the right vehicle is, or what the right model is. I think companies, and traditionally funded for-profit companies, bring a lot of problems.
I also think that trying to build these tools and pieces of infrastructure in-house can come with other problems, like the ability to have a 20 or 30 or 40 year dependency and sustainability model. And so, yeah, I think I put it out there saying: well, what do we do? I don't know what the right tool or approach is. Our approach so far has been that the nonprofit model gives us the least number of downsides for the positives that it provides, but I'm not necessarily confident that that's exactly what it ought to be; it's just what's been working so far. I do have a slide that I use a lot in another talk where, for a lot of other things, it's very clear who the player and who the organization should be to build a thing. Libraries fill a very particular role. Universities fill a very particular role. Government fills a very particular role. But for the role of building, sustaining and maintaining public digital infrastructure, there's not an organization, there's not an institute type that's dedicated to that, and it feels like there definitely could be, because relying on Google or Bitbucket or Atlassian to kindly host the things we need for science and research in academia has not been, I think, what we all want it to be. Yeah, sure. Okay, another question about the difference between PubPub and other alternatives such as Overleaf, for example. Yeah, in my mind they're pretty different products. Overleaf is very focused on the collaborative LaTeX writing process, and PubPub is more generally focused on the whole publishing pipeline, so you can both collaboratively write your document, more like Google Docs as opposed to using LaTeX, but then you can also publish it, you can version it, you can make it publicly visible, assign a DOI to it, host an entire journal on the platform, and so I think it's closer to something like Medium than it is to something like Google Docs. It's sort of a whole publishing platform. And then there's a more technical question about semantic technologies like RDF and Bioschemas: does the platform support that? Yeah, I guess in some ways, now. There's a whole bunch of sort of media components that you can embed into articles, and that's a pretty flexible and open source component library that people can contribute to, so just like you can embed a video or a PDF file, we want to build widgets so you can embed Python notebooks and things like Bioschemas and structured visualizations like that. It also depends, for exporting, on how we export content based on the archive that needs that metadata for archiving, so for some archives we do export a suite of different metadata standards. So we don't have direct support for Bioschemas right now; it hasn't come across from the users that we have so far, but I'm happy to look into that, and we always support things like that when there's a need for them. Okay, a question about Open Journal Systems. Is there a relation with Open Journal Systems? Not on paper, but we're friendly with them and they're good guys and a good team overall, and so I think we take a lot of inspiration from them. I think their model has been working for a long time and is nice.
I think our focus tends to be on slightly broader audiences: we have a lot of journalists, we have some government agencies that use the platform, and so we're not as specifically focused on academic journal production, even though you can do academic journal production. And so I think it's just a little bit of a broader remit than they focus on, which is sometimes good and sometimes means we're not as in the weeds on very specific academic needs. Maybe the last question, about the last point of your presentation about a fair distribution of power: is there a contradiction between a fair distribution of power and the existence of an institution to support it? What is your view? I don't think there's a contradiction, but there's a way that you can do it badly. Governments are institutions, and fascist governments are not a fair distribution of power, while democratic governments are closer to a fair distribution of power. And I don't think the lack of that institution, I don't think anarchy, is particularly successful at fairly distributing power necessarily. So yeah, there isn't a contradiction in my head, but I think there are ways of crafting an institute that works against the goal of fairly distributing power, and it's important to not pretend that just any institution is going to work. I think we reached the last question. Do you have any comments, a last message to share? Just appreciation for the questions. Thank you all for listening. And yeah, I'm happy to hang around if there are other questions, and I'm looking forward to the other talks. Okay. Thank you Travis, and the conversation can now continue; people will be able to reach you and join you here to continue it. And now we have the next presentation. Thank you.
Having started on a very typical path for open source projects - needing to solve our own problems - we were struck by the challenge that our technology wouldn’t be sufficient for our eventual goal: improving the culture and process of scientific publishing. The challenge is not only technologically broad and complicated - but there are also enormous cultural and operational logistics needed to approach real solutions. Not least of which is the ability to provide resource-constrained, technologically-limited organizations with the stability and support they need to make commitments that will last years if not decades. We’ll share our experience with PubPub and the Knowledge Futures Group (a non-profit organization dedicated to building digital infrastructure as a public utility), what it’s taught us about building sustainable open source products, and welcome contributions to make our work more supportive and inclusive of the entire knowledge community.
10.5446/53338 (DOI)
Hello everyone and thank you very much for giving us the opportunity to talk about our project today. I'm Giorgio and I am part of the team of people who design, develop and maintain RAWGraphs. The team is composed of DensityDesign, a research lab at Politecnico di Milano that has also been the birthplace of the project, initially conceived and developed by Giorgio Caviglia. Then there is Calibro, a design studio founded by me and Matteo Azzi after our experience at DensityDesign. And then we have Inmagik, a development studio focused on open source web applications, mobile apps and data management systems. The project is seven years old, and today we would like to briefly talk about the reasons why we created it, in which context and how it's used. We will also talk about how the project is evolving and give you a preview of the upcoming new version that we will release publicly in a few weeks. So, I think that some of you may be familiar with RAWGraphs, but I would like to start from the beginning. First of all, what is RAWGraphs? RAWGraphs is an open source data visualization tool and framework built with the goal of making the visual exploration and representation of complex data easy. It was conceived mostly for designers and it helps create semi-finished vector visualizations that are easy to refine and edit with other software. The tool is freely available at rawgraphs.io, and I would like to show you a very quick demo of how it works and the steps that you need to follow in order to produce a visualization with RAWGraphs. First of all, you take a dataset from any spreadsheet software, you copy it and you paste it inside RAWGraphs. Then you can select a layout from the charts built on top of D3.js that are available. Some of them are common charts like scatter plots or bar charts, while others are a bit more difficult to reproduce with other software, like alluvial diagrams or horizon charts. After this, the user has to map the dimensions of the dataset to the visual variables of the selected chart through an easy to use drag and drop interface. For example, for a scatter plot, the visual variables you can define are the x and y axes, the area and the color of the bubbles, and the labels. As soon as the dataset dimensions are mapped, you see the visualization appear below, and you have the possibility to change some options and finally download the visualization as a raster image or as an open vector file. The tool, as I said, was originally created as an internal tool and a side project at DensityDesign, our research lab within the communication design department at Politecnico di Milano. As designers and researchers focused on the visualization of data and complex phenomena, we were always looking for new tools and technologies that could help us during our daily activities at the lab and for producing these kinds of outputs. In our very iterative process of exploration and visualization, we always use different tools and approaches. At the time, there were not as many tools available as now for producing complex and non-conventional charts. Some of them were expensive, others required development skills and a lot of time. So we thought that there was a gap and we decided to make our own tool. Here you can see one of the very, very first versions of the tool: a simple interface, no frameworks, just JavaScript and HTML, and a small selection of charts that were made with D3.js.
After a while, we started working on more functionalities and we improved the user interface. We added more charts and further functionalities. And while we were developing it, we understood that what we were doing could also have been useful for other researchers and other people with the same needs. So, since we have always been fascinated by and thankful to the open source community, we found it natural to publish the code and the tool with an open license to allow everyone to use it, collaborate and improve it. Designers, students, journalists, historians, scientists and activists started including RAWGraphs in their toolkits, and they also started sharing the work done with RAWGraphs with us. Here, on this page of our website, you can see some of the examples that were made with RAWGraphs. Another positive note and a positive feedback is that the tool has also become widely used in didactic activities, and many resources, tutorials and presentations were shared online. So from 2013 to 2019, apart from some sponsorships, we kept on working on RAWGraphs as a side project and basically for free. We added charts and features according to our needs, without following a clear roadmap of development. So we got to a point where we realized that if we wanted to keep on working on RAWGraphs and keep it updated, we had to invest more time and resources into it. This is why in 2019 we decided that it was time to find a sustainable way to maintain the project. The goal was to fairly compensate the people that worked on it and also expand our team, dedicating the right amount of time, resources and skills to it. So in order to accomplish this, we launched a crowdfunding campaign on Indiegogo to raise funds to support the design and development of a brand new version of RAWGraphs. And we went for the crowdfunding option because we thought it was an interesting way to get in touch with our pretty wide user base and also to continue being independent. The most important thing for us was to keep the tool 100% free and open to everyone, staying somehow true to the original spirit of the project. And a couple of months after we launched our campaign, we luckily reached the goal, and we got generous sponsorship from academic institutions, research labs and private companies that use RAWGraphs regularly. We received more than 400 donations from 40 countries, and since we reached the goal, we kept the campaign open for the whole year. Here are some of the sponsors that have generously supported us. After the campaign, we dedicated a full year to designing and developing what we promised in the campaign. And in September, we released the biggest update of RAWGraphs to the main supporters of the crowdfunding campaign, as a perk for being so generous with us and also as a way to get feedback and insights about how the new version is received. So let's get into the details of this new version and see some previews of what will be released to everyone in the coming weeks. First of all, we have to clarify that this is not just a simple update of the software, but a completely refactored version, from the core, the RAWGraphs library, to the interface. The three things that are at the center of the project, so the library, the visual models and the interface, have been divided into three interlinked repositories released under Apache 2.0.
The very first period of development was spent on defining and implementing a brand new library, as I said. The new library is written in ES6 and makes use of updated releases of third-party libraries like D3.js. Our goal is to have a more robust and flexible library to better handle the data input and ease the implementation of new features and new charts. And thanks to the fact that the RAWGraphs library is independent from the RAWGraphs application, it will be possible to use it in any web application created with HTML and JavaScript, to bring the simplicity, the data mapping concepts and the visual models of RAWGraphs to any other project. The library will be available in the coming weeks together with the app, as a free and open source library published in a separate, dedicated repository with documentation about how to use it and how to integrate it in other projects. After the library, a lot of effort has also been put into refactoring already existing charts and adding new ones based on the requests of our users. It's a mix of conventional and unconventional charts that allows different types of data exploration and data visualization. The app and its interface were completely redesigned, keeping in mind the design principles that we used for the first version: a simple one-page app divided into different sections, each one dedicated to one particular aspect of the process. So now I would like to present the new features of RAWGraphs and give you a preview of the new app that will be released in the coming weeks. The first part is related to the data loading process. As in RAWGraphs 1, you can input different kinds of data: you can paste it from a table, you can load a CSV, a TSV or a JSON file, or you can load a file from a URL. What we included now is a completely new part related to the possibility to specify data types and to change the data parsing options. Then, when we go to the section about selecting the chart, we can see that we expanded the collection and we present and organize it in a better way. Users can filter charts based on their type, and they will have access to the source code and to the learning guides. In the future, we will also add the possibility to load custom charts in an easy way. Then, if we scroll down, we go to the mapping section. The mapping process is very similar to RAWGraphs 1 in terms of interaction, but we introduced the possibility to also choose the way you aggregate data, since in the previous version the only aggregation available was the sum. For some charts we also included, as you can maybe see in the preview above, the possibility to create series, to automate the work of producing a lot of charts. Then, if we scroll down, we can see the preview of the chart, and we can see that we added a lot of options that were not present in the first version of RAWGraphs. So even if RAWGraphs is still focused on producing, let's say, semi-finished visualizations, we felt the need to add more control over the output. And thanks to the new library, we were able to introduce more options. According to the type of chart, the user is able to manipulate the artwork options and the legends, edit the color scales, and change other options related to each chart type. Here there are some examples of the different layouts and all the different options that you can change. And then we get to the export section, which is where we can see one of the biggest updates of the project.
So as in RAWGraphs 1, you can export to raster formats, so PNG and JPEG, or to an open vector format that can then be refined with other software or included in web pages as SVG. What we included that is completely new is the possibility to export projects and reopen them again in other sessions. The project is a file that includes the data, the mapping options and the chart options, so you can go back to the app, open the project, and start from where you left off. At the moment of this presentation, RAWGraphs 2 is available only to our sponsors and to the supporters of the campaign, but by the end of the month it will be publicly available to everyone, and the repositories as well. For us, it will be a very important moment to collect feedback, publish tutorials, guides and documentation, and work on improving functionalities, fixing the bugs that we will encounter and involving the community around the project. After this phase, we will also think about other ways to keep the project financially sustainable through other types of sponsorships and donations. And that's all for today; we would like to hear your feedback and questions about our project.
RAWGraphs is an open source web application for the creation of static data visualisations that are designed to be further modified. Originally conceived for graphic designers to provide a series of tasks not available with other tools, it evolved into a platform widely used in research and data journalism contexts that provides simple ways to map data dimensions onto visual variables. It presents a chart-based approach to data visualisation: each visual model is an independent module exposing different visual variables that can be used to map data dimensions. Consequently, users can create complex data visualisations. Finally, the tool is meant to produce outputs that are open, that is, not subjected to proprietary solutions, and that can be further edited. Thanks to an intuitive user interface and experience, drafting visualizations becomes an easy task, enabling the user to produce visualizations not only as a mere output but also as a tool within the research process. Last year we launched a successful crowdfunding campaign to raise funds for the redesign and development of a new version of RAWGraphs, which will be released in the first months of 2021. The new version is written from scratch with the aim to make the tool more flexible for customisation and to create an active community also on the development side. The talk aims at presenting how RAWGraphs has been used in research contexts and our strategies to keep the project free, open source, economically sustainable and independent.
10.5446/53339 (DOI)
Welcome to this presentation about the ReplicationWiki, which informs about empirical studies in the social sciences. I founded the wiki in 2013 during my PhD studies in economics. To motivate the talk, first let me try to convince you why we need this wiki, then I'll show you how it works and which stack we use, and finally what we plan for the future and thus where you could contribute. To give you an example of what this is all about, let's think of something that is really relevant to each of us at the moment: the corona crisis. It's not only a health crisis. To protect the population we have lockdowns in many countries. This has the side effect that we cannot meet in person now, which also has good sides, as this way more people can participate, and from anywhere. But in general it freezes the economy, it causes unemployment and a massive loss of income. There are many social science aspects to the question of how to best react to this crisis. From an economics perspective some may say: let there be creative destruction, survival of the fittest; those that manage challenges like a crisis best are probably the most able leaders in other circumstances as well, so they should be rewarded for their success and prosper, and for the others there will be some trickle-down effect. Others may find that such an approach would involve too much hardship to bear; there might even be unrest going much further than mass demonstrations of yellow vests or people storming parliament buildings, as was seen during the great depression in Argentina. One proposed solution is the universal basic income. Some think it would be too costly, or that people might just become lazy. Others say it saves an enormous amount of administrative cost because no one has to check who actually needs it, and that people feel better when not under pressure or social stigma, so they can do what they really want and might even become more productive. Like in sports and politics, on economic policy everyone has an opinion. But how do we find a scientific answer to the question whether a universal basic income can help in times of crisis? What economists do is build a model of the world and try to argue why different factors are related in certain ways and how different policies would have different effects. Some say that such models can always be tweaked in a way that they predict what the researcher wants to show. So an additional approach is to investigate numbers from the real world, to work with data; that's what we call empirical work. There are still many different approaches as to what numbers should be used and how they should be investigated, and many studies trying to answer the same question come to different conclusions. That can be frustrating. What many people may not know is that traditionally social scientists just made their calculations, published the results in academic journals, and then the readers had to believe them or make their own calculations. The data and the code of the calculations were not shared. So in case different studies came to different results on the same question, it was difficult to identify the exact reasons for the discrepancies. Politicians could just pick studies that came to their preferred result and use them to justify the introduction of the policies that they wanted to implement anyway.
In 1986 researchers then used an archive that an editor of a journal had kept of all the data and code of the empirical studies published, and they tried to re-run the calculations to see if they could get the same results. That's what we call a replication of a study. It turned out that in most cases the original results could not be replicated, and a big discussion started about why this was the case and what could be done about it. Around this time the internet became accessible to more and more people, and the first journals started to just put the data online so anyone could check results. It took many years until more and more journals followed that example, and there are still many journals that do not enforce rules that empirical work should be accompanied by the underlying data and code. There are also cases of confidential data; think of medical data or census data. Many people would not want their income data to be available to anyone in an online appendix of some academic journal article. There's also the case of proprietary data: if a firm collects valuable data and sells it to researchers, these may get valuable scientific insights from the data but will not be allowed to share it. This is basically why I set up the ReplicationWiki. I wanted to give an overview of empirical studies and their data and code availability, and also of replication studies and their results. The case that attracted a lot of attention to replication centered around a study by two economists of Harvard University on the relationship between government debt and economic growth. It became influential in politics as it presented an easy to understand stylized fact that can be used to support the case for austerity policies: the economies of countries with government debt higher than 90% of GDP grow much less. Thomas Herndon, who you see here, was a PhD student at the time and found that in the Excel sheet used for the analysis certain data had been left out. He argued that after correcting for this and making other adjustments that he regarded as necessary, the stylized fact disappeared. Thomas gave lots of media interviews around the world and was invited to present his work at many institutions. We gave a workshop together on replication in San Francisco. He has by now become an economics professor at Loyola Marymount University in Los Angeles. One of the authors of the original study is now chief economist of the World Bank. There is still an ongoing discussion on how the data on growth and debt should be interpreted and what we can conclude for our current situation. What we can certainly learn from this is the following: even a non-economist can find errors in Excel sheets, and if it can be as easy as that to initiate a worldwide discussion about pivotal questions in economics, then more transparency by making such material available can definitely help to improve the quality of empirical social science research. So this is why I founded the ReplicationWiki, whose main page you see here. It is a database of by now more than 4,500 studies from the social sciences for which empirical methods were used. It lists which of the studies have data and code available online. In cases where replications are known, they are classified by their type and results. At the bottom here you see which software was used. MediaWiki is the software that is also used for Wikipedia. It works well for text-based collaborative pages with images. What we wanted is a kind of database structure that allows for complex searches.
A typical example is an instructor who searches for studies that have replication material available, employ a method that is currently taught, use software that is available to the students, and cover a topic that is regarded as interesting. To search like that in Wikipedia, say you have seen a band with keyboard, guitar and drums and the singer's first name was Jim: all the information is there, but you can only use full text search and categories, and you won't get a nice list of all matches. Semantic MediaWiki allows such a structure. For example, what software was used is stored as a property of each page for a study, and you can easily search for all other studies that have the same property. A number of extensions help us. For example, a form extension allows users to just add information to predefined fields, so no markup language that deters inexperienced users is needed. Some of the extensions could benefit from adjustment to our purposes. For example, here you see the form that we use to enter new data; it allows easy editing but can only produce one new page at a time. It would be good for us if there was an option to create several pages at once with such a form. And here you see why. This is a typical page for a replication. At the top you see bibliographical information, the second line informs about the availability of replication material and what methods, data and software were used, and the third line informs about the original study that was replicated. That study also has its own page on the wiki with the same information about it and potentially more, for example if there was also another replication of it. It would be great if a form could change information on both these pages at once, because currently everything has to be entered twice. That causes redundancy and potentially contradicting information if errors are made when entering the data. It's probably not difficult to find a solution, but we would need someone to work on this with us. On top of what I just explained, we also face other technical challenges. One is that our forms cannot be used for relations between more than two studies. I already mentioned that some studies get replicated more than once; some replications also replicate more than one original study, and some replications get replicated themselves. So instead of one-to-one relationships we need m-to-n relationships, and we do that with templates and markup code, but that is user-unfriendly and creates redundancies. We already found an extension for multiple instance templates, but we still need to implement what we want here. We would also like to create automated links for selected content of properties. For our authors you can just click on them and then see all their other studies that are covered by the wiki. For data sources, for example, we also have information like which years they refer to in the same property field. For the years it doesn't make sense to create links, but for the data sources it would be useful, so it would be great to have a solution for this. Finally, we would like to work on our search options. This is our advanced search form, which has many options and allows for complex searches. It works great for me, but it's a bit too complicated with its syntax and not very user-friendly for others who just want to try out whether this is a useful service for them. This here is based on a very useful extension called Drilldown: you can just click on categories and select values of properties. Unfortunately it's not always possible to click on multiple values of properties.
Say you search for all studies from 2014 and also those of 2015. I found out that if you type that into the URL it works, but again that is very user-unfriendly. I wrote a bug report, and the developer who created the extension even showed personal interest in our wiki and interviewed me about it, but he's busy with other stuff and at the moment no one is working on fixing this. In spite of some technical imperfections, the wiki already helps to build on existing research. You can compare your results to those that were already published and find out what material is available. If more than one study has already addressed the same question, you can see whether the results were the same, and if not, what the reasons for this were, for example different methods or data, an error in one of the studies, or different authors just interpreting results differently. The wiki helps to identify practical examples for social science education, as one no longer has to do the tedious work of going through all the single journal archives to find studies that could be useful for a course. We have already had teaching cooperations with universities around the world. We see online that the wiki is mentioned in instructions for courses, also at institutions that we did not have direct contact with, and we were recommended by learned societies and libraries. We also used the wiki for some of our own research; for example, here I showed how the use of software is distributed in economics. In our overview of databases used you can see that the economics literature is very much dominated by the red ones from the United States, so there's a lot of opportunity to check whether results generalize by checking them with data from other countries. At the beginning we acquired about 200,000 dollars for this project. We have had more than 6.7 million page views and more than 280 users registered. I was invited to present the project at many places, and various blogs and the media reported about it. We have a link exchange with the very widely used project Research Papers in Economics. I could publish about it, and the wiki was cited numerous times in academic works across social science disciplines. We are cooperating with a number of associations and institutions and are open to further partnerships. For the initial phase we had student assistants generating the content. That worked very well to show that this is a useful service. It is, however, costly and not easily scalable. Now that we have explored what kind of data we can generate, we would like to use machine learning and natural language processing techniques to generate content. First we need to identify the empirical studies in the literature. There are also theoretical studies, book reviews and other kinds of content in academic journals that we are not interested in, so we need to sort them out. Then we would like to classify studies by whether code and data are available. We already have code written in R for this. Next we would like to identify what data was used. The Rich Context project has been working on this for many years and with many partners internationally, not only from academia but also from the software industry, like Google and Facebook, and from artificial intelligence. The results are still far from perfect, but they started a competition; if you want a real challenge I encourage you to participate. It might be easier for us to at least classify studies by the geographical origin of their data. Next we would like to classify studies as replications, corrections or retractions.
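The classification steps sketched here, for instance deciding whether a study states that its replication material is available, can be prototyped with standard text-classification tooling. The snippet below is a generic scikit-learn illustration in Python, not the project's existing R code; the training sentences and labels are invented placeholders, and a real model would be trained on labelled journal articles.

```python
# Generic sketch of a "replication material available?" classifier with scikit-learn.
# Training examples and labels are invented placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Data and Stata code are available in the journal's online archive.",
    "Replication files are provided as supplementary material.",
    "The data are proprietary and cannot be shared.",
    "This theoretical paper derives comparative statics from the model.",
]
labels = [1, 1, 0, 0]  # 1 = replication material available, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Code and data to reproduce all tables are posted online."]))
```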
All this has turned out to be very useful to the community. If there is data out there, there are usually also scientists who try to analyze it and publish about it, even if it does not make too much sense. There have already been studies that use data from the wiki. In the first one here, computer scientists tried to predict the replicability of empirical studies based on the words used in them. The reasoning behind that could be that if researchers are aware that their results are not strong, they might use language that is a bit vague, and this could be detected. In this case I could have told them right away that this would probably not work with our data. First, because there is so much bias towards replications exclusively getting published if they find something problematic in the original study, the sample is heavily influenced by this. Second, the types of replications are so different: some use different data, some different methods; if the results are not the same, that does not necessarily mean that there was anything problematic about the original study. One would need a much cleaner kind of experiment, where the conditions are the same for each replication, to get a clear picture about which kinds of studies are replicable and which are not. There was another article for which studies were used that had been retracted for manipulation. That is a clear criterion, although even here, especially as long as we don't have open data and code for all empirical research, just because a study was not retracted for manipulation doesn't mean there was none. It is, however, an interesting idea to check whether in the retracted studies different language was used, and maybe like that one could even detect further ones that might be worth checking. With the replications I am more skeptical. What use does it really have if some algorithm says that, based on the language compared to previously replicated studies, there is some 70% chance that a study is replicable or not? In the second case here, researchers searched for replications. They are not very open about how exactly they did that, but at least they could identify some further replications. Unfortunately, they then used the data they had obtained to estimate determinants of which studies receive a published replication, and drew such an ill-suited sample for this that they ended up with the opposite of the result that can quite easily be seen from the data. If you click on the link to their study, it brings you to the respective page in the wiki where my comments on this are listed as a replication. There is this debate about whether open data leads to researchers working with the data of others that they do not fully understand and then producing questionable research with it. In my experience, yes, this happens, but in my opinion this should not be used as an excuse not to open data, because that leads to the much worse state of science in which practically no published result is easily testable and thus reliable. What is needed is better post-publication review and discussion to root out errors and build consensus. We now plan to massively expand our project to include all empirical work in various social science fields that was published in major journals that keep online archives of data and code. For a related project, colleagues used machine learning techniques to collect information about studies in machine learning. They made quite some money with that.
So we got inspired, and we now plan to read in data from journal websites and bibliographical databases. We already have code written in Python to create wiki pages in an automated way. What we now need is self-learning algorithms that detect relevant content from the available information and bring it into a form that is helpful to our users. What we are looking for is developers who help us to improve the wiki architecture or to generate content with machine learning techniques. We are currently raising funding and turning towards foundations and academic institutions, but also investors, as we saw with the related project that work like ours has quite some value on the market, as one can also see from the usage numbers. As a first step, you could just register to show your support. Ideally you could add content or work on the software, or just make a small contribution and vote on which studies you think should be replicated. I thank you very much for your attention. If you are interested, I have some further slides with literature that can be downloaded, but now I hope for a lively discussion.
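As a side note to the automated page creation mentioned in the talk, writing study pages programmatically typically goes through the standard MediaWiki edit API. The sketch below is a generic illustration using the mwclient library, not the project's actual Python code; the wiki host, credentials, template name and parameters are all hypothetical placeholders.

```python
# Generic illustration of automated wiki page creation via the MediaWiki API
# using mwclient (pip install mwclient). Host, credentials and template fields
# are hypothetical placeholders, not the ReplicationWiki's actual setup.
import mwclient

site = mwclient.Site("wiki.example.org", path="/w/")
site.login("BotAccount", "bot-password")

study = {
    "title": "Example empirical study (2015)",
    "journal": "Example Journal",
    "year": "2015",
    "software": "Stata",
    "data_availability": "yes",
}

# Fill a wiki template with the study's properties so that Semantic MediaWiki
# can index them as queryable properties; template and parameter names are invented.
text = "{{Study\n" + "".join(f"| {key} = {value}\n" for key, value in study.items()) + "}}"

page = site.pages[study["title"]]
page.save(text, summary="Automated import of study metadata")
```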
The ReplicationWiki provides an overview of published empirical studies in the social sciences with information on data and code availability, data sources, and software. One can search for keywords, Journal of Economic Literature codes, and geographical origin of data. It informs about 670 replications, that is studies reanalyzing previously published results, as well as corrections and retractions. The wiki helps researchers to compare their results to those of previous studies. It is a resource that helps to identify useful teaching examples for statistical methods, replication and studies of social science. It allows advanced students and practicing researchers to find guidance on how to publish their replication research in various journals. A collection of teaching resources, useful tools, and literature helps instructors to integrate replication into their teaching and students to integrate open science practices into their own research. With the ongoing expansion of the wiki, currently covering more than 4,500 empirical studies, it is becoming an ever more powerful tool for social science research and education. It is a crowd-based platform where users can add their own replication results, suggest studies that should be replicated, and identify for example further data sources used in the empirical studies, especially ones from countries underrepresented in the literature and for whom economic policies are thus difficult to investigate. The Wiki uses Semantic Media Wiki technology that is evolving, and a number of technical improvements in terms of usability and database structure are planned. A massive expansion of the content is planned based on machine learning and natural language processing techniques identifying the relevant information from the available data. For the technical improvements further expertise is welcome, and for the content expansion developers and researchers from all fields of the empirical social sciences are invited to join.
10.5446/53340 (DOI)
Hi, I am Xavier from France. I am a contributor to the Exposing the Invisible project from the NGO Tactical Tech, and I am in my kitchen in France, as you can see. Let's talk about reverse engineering as a crossroads for investigation, variance, and open tools and technologies. What is reverse engineering? Reverse engineering consists in studying objects or methods, sometimes in order to determine their internal functioning. It consists in identifying a precise case, making a reconnaissance, disassembling piece by piece and step by step, then understanding the mechanism, and finally reassembling the object or the method with a new value proposition in its operation. In the free software world, some people know software like Radare2 or the GNU Project Debugger, or the many tricks involving hundreds of command lines in the practice of reversing with embedded tools, also used for open source intelligence. So who are the current usual suspects in reverse engineering? Using these tools requires some time to tame them and then learn the associated techniques and methods. It also requires meticulousness and precision in their use. And let's be honest about the thousand frustrating mistakes on the way to reaching a level that provides some satisfaction. Software developers, but also academics and engaged citizens, journalists and activists, need a social configuration to collaborate in peer-to-peer contribution to this software and its documentation, to their learning and the associated knowledge. This is also an issue of new ways of producing free knowledge, which I think is the best part, and also the power to liberate knowledge. Reverse engineering is not limited to software. Some people who are used to hacking festivals might also think about the hardware of computer machines. We would like to show you that it's possible to practice even further than these two points, and that it is in the interest of the previously mentioned communities to collaborate in this area as well. That's why we launched IQEA, the Exposing the Invisible seminar group. IQEA is a group of people with different backgrounds and knowledge in different countries. It stands for investigation, questioning, experimental approach. Now, let me show you some examples of reverse engineering. First of all, please keep in mind: do not trust any black boxes. The first example comes from a café with popular education. With popular education, you can do reverse engineering like you play with biology, for example. Let me show you. You can do it in a bar, when you bring people together who are not engineers, who are not scientists, who are not hackers, but who are motivated to investigate environmental issues that affect their daily lives and community life. You can simply rely on the popular education method to learn reverse engineering. You take some small everyday objects and divert them to serve the needs of environmental analysis. Even a simple sheet of folded paper can be used as a pedagogical and technical basis. And you can do that while drinking tea or beer; it's a kind of introduction to the investigation. You can find some information in the Exposing the Invisible Kit on the web. Now let's have another point of view with Clio. Hello, I'm Clio and I'm a member of the IQEA group with Xavier. After a few weeks of reading and researching, I conducted some interviews with experts such as scientists and designers in order to better understand what's behind reverse engineering and how people apply it to their work.
In this process, I discovered that people do not always consciously realize they're working with a reverse engineering approach, even though they use the methodology every day. Only in the moment in which they reflect on the concept and question it do they suddenly understand that their approaches are reverse engineering, but in different ways, let's say. For example, David Benqué, who is from the Institute of Diagram Studies and a designer at CryptPad, thinks that reverse engineering sits within the frame of creative practices and knowledge production. So for him it is not purely technical; reverse engineering is a way to look at things from different perspectives. For example, the historical perspective is also a reverse engineering tool. Another example is Oliver Keller, a scientist from CERN studying radioactivity, who made the DIY particle detector. For him, the main statement was looking at reverse engineering as a learning tool, so also as a way to understand the physics around us and the black boxes of natural phenomena. And one of the main questions he raised was: how do you reverse engineer randomness? Can we do reverse engineering on randomness? What a huge question. Now another example: we were in a café with people extracting DNA from environmental samples. And now, did you ever try to reverse engineer a pregnancy test, an actual Clearblue test? Let's have a look. By doing it with a small team, you will learn together and also learn from each other. You will see that every tiny slice of paper hidden in the plastic cassette opens up a whole set of technical, scientific and social questions. What is the place of women in society, in science and in tech, and in the history of science and tech? Are the reagents used in the test paper free or proprietary? What happens if you reassemble the test for another use, or use it again after the first attempt at a diagnostic? And guess what, we needed software to do that. One last example for the road: perhaps you already heard or read about the reverse engineering of several coronavirus vaccines. Yeah, do not trust any black boxes. This reversing opens avenues of knowledge, investigation, studies, transparency and activities. Well done. Now, the main question is: when will free and open source vaccines be available? Yes, we run after knowledge, collaboration, citizen science, investigation, etc. However, that is not all: reverse engineering as a method and as a crossroads also serves a new purpose. Instead of constantly extracting every resource from the soil around us, it's an ecological and strategic redirection, a redirection of knowledge production. And that's our invitation to you. Thank you and see you soon.
Reverse-engineering investigations implies being able to reconstruct and experiment with the methods and information acquired in the initial process in order to visualize what elements are indispensable to how a "system" works. It also allows one to figure out what elements could be removed, replaced with more contextual information if the method can be replicated, or "hacked" to make the "system" or method work differently in other settings. Methodology & Major Findings: We are experimentally trying to revisit the practice of reverse-engineering to explore these possible and effective contributions in the case of investigation (journalism, activism, science, art). We engage with different techniques, tools and methods along with the individual practices of those working at the new frontiers of investigation. We all come from very different research and professional backgrounds, use different methodologies and techniques, investigate very different things. From a software perspective, reverse-engineering is used for: - industrial espionage - cracking - exploit creation - security audit - bug correction - malware analysis - interoperability - scientific study - fun.
10.5446/53341 (DOI)
Hello and welcome to the schema-collaboration talk here at FOSDEM. This talk will have about six minutes, more or less, of presentation slides and then a demo of schema-collaboration. So, something about me: I'm a software engineer, I'm not a scientist or a data manager; we'll be talking about scientists and data managers later on. I currently write applications in Python and Django, and before that in other technologies, and schema-collaboration itself is a web application developed in Django. For many years I've been writing software that is used by scientists, researchers, data managers and this type of community. At the moment I'm also a part-time software engineer at the Swiss Polar Institute, and it is in this context that I've missed having something like schema-collaboration. And well, here it is. Also, I'm a serial FOSDEM attendee of many years. So, what is Frictionless Data? Frictionless Data is a project from the Open Knowledge Foundation, and it has a set of JSON schemas that help document data. When I say data here, I mean either data packages, so a set of files, where it can record the size of the files, hashes, titles, descriptions of these files, author, license and keywords; it also helps describe the contents of CSV files: which columns the CSV file contains, what format the date-time is in, and other information. It also has a set of tools to help describe, validate and work with data. So for example, here we have a datapackage.json; this is what schema-collaboration helps create. Here we can see that the file is a data package, it has a list of resources, each one is a file, either a local file or a remote file given in the path property, and then they have a name, a title, a description and a format, and in this case the example has a tabular-data-resource profile, because it's a CSV file that we'll see soon. And as I said, it has a list of files here. The CSV file, we know more or less what it looks like, but it has column names date_time_utc, latitude, longitude, and it has data for each of these columns. This is a simplified version of the file that I'm linking below. Then there's a tableschema.json; in this table schema there's the list of fields, and for example if we take the date-time column, its name is represented in the name property here, or we can take the date-time format and specify the format that goes here; the same for latitude: it has a name and, for example, its units. I've also specified here the maximum and minimum values for latitude. All of this can be validated, so we have tools such that if a row has a latitude that is bigger than 90 or smaller than minus 90, the Frictionless Data tools will say that this is not a valid file, or that it does not match the schema. So, to create frictionless files, we can write the JSON files by hand, or we could use frictionless-py: we pass it a file and it creates the schema of this file for us; or we can use the Data Package Creator web interface, which is another way of working with data packages and looks like this. The Data Package Creator is used by schema-collaboration, as we'll see in a bit. So, before and after, or without and with. On the left we can see that the data manager is sending emails to the scientists asking them to describe certain fields and the data package in general. They collaborate with each other, and the data manager would create the JSON files, maybe with the Data Package Creator, by hand or with some tools. The data manager might be working with many scientists at the same time, so I have two here, but maybe it would be 15 or 20, and there are many data packages being documented at the same time.
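Before moving on to how schema-collaboration changes this workflow, here is a small Python sketch of the kind of descriptor just described and of a validation run with frictionless-py. The file names, field names and constraint values are illustrative placeholders rather than a real dataset, and the snippet assumes a positions.csv sits next to the generated datapackage.json.

```python
# Sketch of a datapackage.json like the one described above, validated with
# frictionless-py (pip install frictionless). Names and values are illustrative;
# it assumes a positions.csv file exists alongside the descriptor.
import json
from frictionless import validate

descriptor = {
    "name": "cruise-positions",
    "resources": [{
        "name": "positions",
        "path": "positions.csv",
        "schema": {
            "fields": [
                {"name": "date_time_utc", "type": "datetime"},
                {"name": "latitude", "type": "number",
                 "constraints": {"minimum": -90, "maximum": 90}},
                {"name": "longitude", "type": "number",
                 "constraints": {"minimum": -180, "maximum": 180}},
            ]
        },
    }],
}

with open("datapackage.json", "w") as handle:
    json.dump(descriptor, handle, indent=2)

# A row with, say, latitude 95 would make the report invalid.
report = validate("datapackage.json")
print(report.valid)
```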
The data manager might be working with many scientists at the same time so I have two here but maybe it will be 15 or 20 and there are many data packages being documented at the same time. With the schema collaboration, all the interaction would happen on the schema collaboration software so it's easier to keep tabs on what's happening, who's waiting for who and the scientists have an easier way to input data and see what's missing and the results of everything in real time. You can export in JSON or you can export smart down and PDF as well. So next let's see the technologies very quickly. It's done in Python and Django. It can work with SQLite or MariaDB as a database and can be deployed using Docker or other ways that Django applications are usually deployed with. Here we'll see a demo in the next step. So now let's move to the demo time. I'll be using this URL. You can go there as well. That's a public open instance so anyone can test the system. You can do different things here but let's start pretending that we are the data manager and so we go to the management area. And the username is Alice the data manager and the password is Cliptonless. I talk often about the data management, data manager. This can be anyone who wants to set up the collaboration initial process. It doesn't need to be any data manager. So here we see a list of data packages that are being worked on but for this demo what I'm going to do is create a new data package. And it opens straight away the data package creator but this one has these external buttons save to server, lock from server and exit package creator and the standard data package creator doesn't have. For example what I'm going to do is create a package that will be, the name will be CostimTox, the title CostimTox as well. And for example is a tabular data package and I'll say the list of CostimTox. And the home page for example, this is optional, I'll choose a license, CCY consider it fills this and I could add keywords here. I'll add a resource and here I can say that this is a tabular data resource. I can say that the encoding is UDF-8 for example and the CSV and this is the CostimTox file for example. I am not a professional data manager as you can see but I cannot fill here. This is a fill name so what's in the CSV and I'll say talk or give a top name, the title the center of the talk and the description the person that presented the talk to link with this person. The Data Type is a string here and no default format that we can choose so we would choose other things. And I'll add only another one, I'll do actually the date start, start time, I'll have the start time here, the description time that the talk starts, the data manager which is my role at this moment knows this as date time and the format I'll leave it as default now. And I'll leave this blank, I'll only know this number but let's say that I don't know what is this field exactly. When I finish I can do a save to server, it's saved, I can exit the perpetrator and now we have here a new dataset. What I'm going to do now in the options is to manage this data package and for example in edit manage I'll say that Bob the scientist is going to help these two Alice and Bob the collaborators. Here in this view I can go to the preview of a collaborator view or I can copy the link, I'll copy the link now, we'll see what's this in a moment and I could download this in the future less schema in Marginal or in PDF. So now I'll open the climate window and I'll paste this link. 
Now I'm the scientist in this view; I can add a comment, for example I can say "Bob, please add the description of all the columns." And I add a comment here — it's saved. Now I take the copied link and open it in a new tab — this could be a different browser, and it doesn't need to be logged in. The URL contains the UUID, so whoever has the UUID can access this. So now I'm Bob here, and for this package it says, okay, there's a comment here: add the description of all the columns. Okay, I'll press edit and I see that I need to add the description, and I'll say that this is — oh, we said the attendees — the number of people that started listening to the talk, and the title can be "listeners" or something. Plus it's a number — actually an integer; plain numbers in Frictionless are floats, so it would allow half a person. So I'll save to server and then I'll exit the Data Package Creator and say "Hi Alice, done, thank you very much." So I add a comment here — done — and Alice can go to the data packages and see that there has been some change here. So I'll edit the data package — oh, fantastic, Bob did what he needed to do — so I'll exit this in order to manage the data package, and I'll say that this data package is completed. I'll save this, and actually, if I want, I can download the PDF from here. I'll open this PDF using Firefox and we have — well, I didn't add contributors, keywords or a version, but we can see that we have the FOSDEM talks package, that it has this resource with this information, and that it has these fields, for example. And that's the very quick demo that I had for today. Thank you very much for listening; if you have any questions I'll be happy to answer them next. See you.
schema-collaboration is a tool that helps data managers and researchers to collaborate on documenting datasets using Frictionless Data schemas. It uses Frictionless Data Package Creator and allows the collaborators to create and share dataset schemas, edit them, post messages and export the schemas in different formats (text, Markdown, PDF). The tool is implemented in Python and Django. The talk will consist of a brief explanation of Frictionless Data schemas, how data managers work with researchers and then I shall do a demo of how the tool can be used.
10.5446/53342 (DOI)
Hello, everybody. My name is Asura and welcome to my talk called Metrics in Context. I'm presenting work in progress, and I've used a QR code — who knew that a pandemic would bring back QR codes — to link to the GitHub repo. So if you see anything interesting, be sure to check it out, but also feel free to reach out on Twitter. And without further ado, I want to jump into my presentation and first of all share an alternative title that I did consider but got rid of pretty quickly. Yes, we're talking about metrics — scholarly metrics in this case — and as I'm assuming that most of us work in academia or used to work in academia, I assume that everyone is pretty familiar with, of course, the citation count, but also the h-index and this whole host of all kinds of scholarly metrics that we use, deploy and encounter in everyday life in academia. Also, obviously, everyone knows that some of these metrics are used in ways in which they are not meant to be used, and these challenging problems cause broader issues that the responsible research metrics movement, for instance, is trying to address. In this talk I want to focus on a dimension that two of the pretty famous manifestos also address. The San Francisco Declaration on Research Assessment, for instance, recommends being open and transparent by providing data and methods used to calculate all metrics. So this is addressing the transparency and the openness around the data and the ways we compute those metrics in the first place. Similarly, the Leiden Manifesto, which is a more recent guideline, also recommends keeping data collection and analytical processes open, transparent and simple. I especially like the formulation of the Leiden Manifesto because it emphasizes data collection and analytical processes, which really shows that we are talking about processes and procedures rather than simple objects and where they come from. So what does that mean if we go back to our slide with all the scholarly metrics that we are familiar with? An obvious first step would be to look at where these metrics come from, and once again, assuming that I am mostly talking to academics, I am pretty sure that a lot of you will be familiar with a good selection of these data providers. There is the Web of Science, Scopus — so traditional publishers and citation indexes — but also Google Scholar and Microsoft Academic, two very, very big commercial providers of citation data. We see open solutions coming, for instance, from Wikidata but also Crossref and the Initiative for Open Citations, and we also see commercial, newer projects like scite.ai or even Altmetric. It is very interesting because we not only see a broad range of governance models and business models, but also a broad range of kinds of events that these companies are all capturing — not only traditional citations: Altmetric is looking at the social media side of things, DataCite is specifically focused on datasets. So what that means is that we are looking at very different citational events that are of interest and that are captured into traces by all these companies. And then these traces, these citational traces, are transformed into what I call patterns, some of which might be metrics that we are familiar with, some of which might also simply be other kinds of data and useful information. What I am proposing in this talk is that we should pay more attention to the tracing and patterning.
That is, to these processes that happen between these familiar entities of citational data. Let's take, for instance, the h-index — just keeping track of the time here. The h-index, which everyone is pretty familiar with and encounters quite frequently as it is shown on Google Scholar pages, is also provided by the Web of Science and Scopus, for instance. And as the h-index has an agreed-on mathematical definition, we know that all these traces are treated, transformed and calculated in the same way. However, what we also know is that the citational events and their contexts are different. For instance, Google Scholar uses the whole host of its indexed pages, whereas the Web of Science and Scopus obviously rely on their indexed published articles — which does not mean that the Web of Science and Scopus provide the same h-index, as the articles they index differ. So what we can see is that, in the case of the h-index, while the patterning might be the same, the processes of tracing are very, very different between Web of Science, Scopus and Google Scholar, and even between Web of Science and Scopus. So to sum it up, I'm basically suggesting that, in the context of scholarly metrics and provenance for these processes of citation, we focus on three aspects. First, the context of the citational events: by asking what the contexts of captured citational events are, we can make sure that we're talking about the same kind of event, the same kind of context where these events — where the data — come from. Second, by focusing on the tracing, we can make sure that we're looking at similar ways of transforming these events into data traces. And finally, by asking about the traces and how they are patterned and transformed into what I call patterns, we can make sure that this step is addressed and described whenever we use a scholarly metric. So doesn't that basically mean that I'm just talking about a fancy metadata standard? Yes, that's exactly the case. However, one of the reasons why I'm here is that I'm trying not to create yet another standard, which is where Frictionless Data comes in. So Frictionless Data — I've copy-pasted their own description, but I will try to give a brief understanding of my own. Frictionless Data is a suite of pretty useful tools for anyone who works with or creates datasets — tabular datasets especially are supported really well in their toolkit. It is language-agnostic, which is beautiful, as the base functionality can be extended to support different languages. And Frictionless Data does this mostly by providing what they call data packages and table schemas, which can be combined to describe coherent datasets. So in this case, we could describe the processes of tracing and patterning involved in scholarly metrics, which is amazing because in scientometric and bibliometric research, as well as in the application of scholarly metrics — which is typically research evaluation — the ways we handle our datasets are quite similar. The structures are the same, and as I just argued, the underlying processes are also the same. So this seems like a wonderful case where we could use the functionality provided by Frictionless Data and their data packages and combine it with some provenance for scholarly metrics, to not only keep data collection and analytical processes open and transparent, as the Leiden Manifesto recommends, but also to make it easy to determine whether scholarly metrics are commensurable and comparable in the first place.
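To make this concrete, here is a purely illustrative sketch of what such a descriptor could look like. A data package descriptor is just JSON, so custom properties can sit alongside the standard ones; the "provenance" block and its keys below are invented for this example and are not an official Frictionless property or an existing Metrics in Context specification.

```python
import json

# Illustrative only: a data package descriptor for an h-index calculation,
# with invented "provenance" keys documenting context, tracing and patterning.
descriptor = {
    "name": "h-index-input",
    "resources": [{"name": "citations", "path": "citations.csv"}],
    "provenance": {
        "context": "citations between indexed, peer-reviewed journal articles",
        "tracing": "publisher metadata parsed into citation records",
        "patterning": "h-index computed per author over all indexed records",
    },
}

with open("datapackage.json", "w") as fh:
    json.dump(descriptor, fh, indent=2)
```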
And by doing so, we can also start thinking and talking about the invisible parts of scholarly metrics that we often overlook and take for granted. Thank you very much for listening. I'm looking forward to your questions.
Google Scholar, Web of Science, Scopus, Dimensions, Crossref, Scite.ai, ... What used to be the home turf of for-profit publishers has become a buzzing field of technological innovation. Scholarly metrics, not only limited to citations and altmetrics, come from a host of data providers using an even wider range of technologies to capture and disseminate their data. Citations come as closed or open data, using traditional text processing or AI methods by private corporations, research projects or NGOs. What is missing is a language and standard to talk about the provenance of scholarly metrics. In this lightning talk, I will present an argument why we need to pay more attention to the processes of tracing and patterning that go into the creation of the precious data that determine our academic profiles, influence hiring and promotion decisions, and even national funding strategies. Furthermore, I present an early prototype of Metrics in Context, a data specification for scholarly metrics implemented in Frictionless Data. Additionally, the benefits and application of Metrics in Context are presented using both traditional citation data and a selection of common altmetrics such as the number of Tweets or FB shares.
10.5446/53343 (DOI)
Welcome to this fireside chat. I am Erik Borra and I am joined by Bernhard Rieder and Stijn Peeters. All three of us work as assistant or associate professors at the Media Studies department of the University of Amsterdam. Apart from being humanities scholars, we also write software for research. Our most well-known tools include DMI-TCAT, which allows one to gather and analyze Twitter data; the YouTube Data Tools for gathering and analyzing YouTube data; 4CAT for the retrieval and analysis of more fringe media such as 4chan, Reddit or BitChute; and the now defunct Netvizz application, which allowed one to query Facebook's Open Graph. Today's talk will focus on why we make such tools and the principles we adhere to when making them, as well as future challenges. But let me first explain why humanities scholars would even think about writing software for studying online platforms. In the past 10 to 20 years, a lot of social and cultural activity has moved onto these platforms. With staggering user numbers and a plethora of contemporary issues, such as activism, fake news and disinformation, or even deplatforming, it has become clear that such platforms should be scrutinized also from a humanities perspective. These platforms, however, are operated by private companies and are thus black-boxed and opaque. Without inside access, it is very hard to know what happens on or through these platforms. We need a way to get to this data differently, and that is exactly why we made these tools. One thing that we do need to be aware of when making these tools, but also just when working with them, is that they will always mediate the research that is being done with them. They do that directly, because they give us the data that we work with, but they also influence and mediate it less directly — for example, by shaping the very questions that we ask to begin with, since those questions will often follow what is, but also what is not, possible with the tools that we have at our disposal. Today much of the work in this area is done in research software engineering for the more computational fields, and as humanities scholars, which we are, our goals are often somewhat different from theirs. For example, whereas a computationally oriented scientist may be looking to generalize a particular dataset into an abstract model to then make predictions with, in the social sciences and humanities our approach is often more oriented towards thick descriptions and then the subsequent interpretive analysis of those descriptions and the data they come from. That leads to different kinds of research questions, but also different approaches to answer those research questions. And finally, that has repercussions for how we build the tools that give us those answers. So in our field particularly, which is media studies, we often analyze the objects of our research from a material perspective. Technologies, data, platforms — they all have their own specificities, and those specificities inform research, or they can even be the focus of the research. To that end, for example, we do not only use APIs to acquire data: these APIs can become objects of research themselves, as can the objects produced through these APIs. Tweets, videos, posts — they all have their specific attributes and they are ranked in specific ways, and that can be studied to make sense of those objects, but it can also be done to analyze and critique the APIs and the materialities that produce them.
That approach requires a certain, let's say, epistemological flexibility, where rather than having one paradigmatic method that can be implemented in one software tool and then reused in different projects, we like to say that we follow the medium, and we adjust the approach to the materiality of the data that we're working with. In that sense — well, to some extent we can actually build a method into a tool. In practice, we often work with what we call recipes. Those are series of analytical steps which involve the use of tools in some cases, but other steps require interpretation on the part of the researcher, based on the data produced by these tools in previous steps, which can then be used to make an informed choice about what to do next. These recipes often correspond roughly to a canonical research method, but at the same time this approach leaves the researcher room to adjust their project when needed. Something like that is harder if a method is implemented in research software directly as one explicit analytical pathway. So what does that look like in practice, in our work? One thing that characterizes our approach is that it is often very modular, in that rather than one monolithic tool, we have a variety of smaller analytical tools and modules, sometimes very simple, that can be combined in different ways. Of course that requires, for each of these steps and each of these smaller tools that you can mix and match, that all of them produce reliable and reproducible results, because when you chain tools like this, the result is only as strong as the weakest link. So to that end, making our work open source is quite important, but just making the source code available only helps people who can actually read the source code. Many of the users of these tools are precisely the scholars who don't know how to program, because we're in the humanities. And that is why, next to just putting the code on GitHub, annotation and documentation of the code should also be part of open-sourcing research software: making it clear to the user what the code will do, in human language; providing references to documentation of the API or system that it interacts with; referencing the relevant technique if it's also written up elsewhere in scholarly writing; and providing these recipes that show in what context the chosen software makes sense and what kind of data it can, but also cannot, be used for. Yeah, and I mean, you know, we've been working on these principles and strategies for quite a while already, but there are also a number of contemporary issues that we're facing, and I just want to highlight three of them that have become particularly relevant for us in recent years. Maybe the most important one is the changing relationship, and growing complications, with large platform companies and their web APIs. Particularly after the Cambridge Analytica scandal, API access has become more precarious — Axel Bruns calls this the APIcalypse. This includes APIs that simply close, right? So for example, Netvizz was a victim of Facebook closing down API access, but this can also mean account suspensions or, you know, API key suspensions. At the same time, new possibilities are opening up — for example, the Social Science One initiative in the US, where the Social Science Research Council brokered access to Facebook data, or, for example, the specific track for researchers in Twitter's new API.
But these new possibilities also come with new bureaucratic requirements, and they cannot easily replace the kind of data that we've been getting through APIs. And indeed, in these kinds of contexts, our tools may also have to be adapted. There is also a renewed debate around scraping, right, which basically means that instead of, you know, grabbing data through an API, you load the HTML interface content and then directly parse it. But around scraping there are also quite important questions about the legality, about the ethics and even about the practicality of scraping — there are, you know, bot-protection measures in place. And so there are questions that remain to be solved, and in some cases they may not be solvable, in the sense that researchers may actually be unable to publish results based on such a method. A second set of concerns concerns privacy and legal compliance, and here the GDPR is particularly important. It does provide some leeway to researchers — there are, you know, some exceptions within the GDPR that actually allow us to get maybe a bit further than a marketing company would — but there are also new requirements, for example the capacity to remove specific users easily from existing datasets. At the same time, academic guidelines are also changing, and there are new requirements that come, you know, through institutional ethics boards and other bodies. That may mean, for example, that scholars have to secure access to these data, to really make sure that nobody else can gain access to them, or there may be obligations to archive datasets. And for us, the question in this context is, of course, how to react to these challenges in the design of our tools, right? So we're thinking about things like deletion interfaces, about encryption of data, and maybe even about automatic upload to services like figshare. The third point, the last set of challenges or concerns, is that the web itself has been changing over recent years, and the status of web data changes with it. So for example, platforms, as we all know, have been deplatforming users, they've been deleting posts, they've been withholding certain data, and in some cases platforms may simply disappear entirely, right? Just one week after Stijn implemented an import module for Parler into 4CAT, Parler simply disappeared. Now it's back again, right? But these are examples of the kind of precarity that we are facing. And these phenomena — first of all, we have to understand them before we can even react to them in our tools, and then possibly also translate these issues so that our users can react to them too. With the web overall becoming more dynamic, the question of scheduling, for example, is becoming really important, right? So when was specific data retrieved? We are no longer in a situation where we can simply say we use an API to gain access to some static database that is only growing and nothing else, right? So that's another vector of questions that we have to deal with. And these are just some of the recent issues that we've been facing. Indeed, they do require additional work and potentially also additional skills that a group of research engineers may or may not have. So with this fireside chat, we introduced why, as humanities scholars, we write software and how we do so, and then we addressed some of the new challenges that are popping up.
Concluding, we'd like to argue that the role of the research engineer should be expanded. Programming is just one part of research, and it needs to have an intricate connection with methodology and with users. As research software engineers, we need to have one foot in research, one foot in software, but also one foot in education and one foot in strategic planning. We are fortunate enough to have received a Dutch grant from the Platform Digital Infrastructure for Social Science and Humanities to start addressing these issues. The project is called CAT4SMR, which is an acronym for Capture and Analysis Tools for Social Media Research, and it gives us five years to work on these issues and to acquire these new skills of lobbying for access, et cetera. If you're interested in this project, please go to cat4smr.humanities.uva.nl, where our tools are listed, as well as our social media presence, and where you can get in touch with us to further discuss this. We warmly invite you to approach us if you're also struggling with some of these issues, such as lobbying for access, or adapting to changing infrastructures and requirements — whether it be maintenance, legal issues, workloads, financing, citation, or even sharing of data. Do not hesitate to get in touch with us. And rather than glossing over such things, we suggest making them part and parcel of future research projects, and thus not forgetting to write them into our research proposals. Thanks for your attention, and we're very happy to further discuss this with you in the following Q&A session.
This talk will focus on our experiences with making open source tools for the study of social media platforms (amongst others, DMI-TCAT for Twitter, the YouTube Data Tools, and 4CAT for forum-like platforms such as Reddit and 4chan) in the context of social science and humanities research. We will discuss questions of reliability and reproducibility, but also how tools are taking part in shaping which questions are being asked and how research is done in practice - making open source particularly relevant as a form of methodological transparency. Two aspects have become particularly important for our tool-making practice: the relationship with large platform companies and their Web-APIs as well as concerns about user privacy and legal compliance with regulations such as the GDPR. Our talk will address these in turn, scoping the issue and proposing ways forward.
10.5446/53345 (DOI)
Hi everyone, I'm Niels and I'm a postdoc at the Max-Planck-Institut für Eisenforschung in Düsseldorf. I'm also part of the BiGmax research network, which aims to develop big-data-driven materials science. Today I'll present my personal use of eLabFTW for my experimental materials science workflow, and since you probably already heard from Nicolas, the developer of eLabFTW, you have some idea of the tool, so this will be more of a use-case presentation. A little bit of background: basically the vision — if you want to get funding — is that we've entered a new era in materials science, namely data-driven materials discovery, the idea being that we can reuse and bundle the data that we have to accelerate the development of new materials. The problem is that we currently don't really manage our research data all that well, and what I hope to present here is perhaps a partial solution: that we can use tools like eLabFTW to improve our data management. So what is the problem? The problem is that our labs — at least in materials science, I don't know about other people's labs — are pretty chaotic. The way that new knowledge is produced is that we start from lots and lots of lab resources like samples, data and instrumentation, and all of these are interconnected in a very complicated graph. We have samples that are related to each other in a hierarchical way — you make samples from other samples, etc. — you use various instrumentation to produce different files in different data formats, and all of that gets funneled down into easily digestible figures and so on. But if you want to go the other way — if you want to find out the details behind a certain data point — then usually that's impossible, because oftentimes we don't really keep track of all of these interrelations. So basically, once the person who knew the overview and the connections between these resources leaves, the data and the samples become completely worthless. For my own work I had been thinking for a long time about whether I could find a solution to this problem, and then I stumbled upon eLabFTW, which does a pretty nice job of getting me to organize my work the way I want. The way I use it is that I basically only use the database function, to store everything that I think is, let's say, noteworthy as an item in the database. So I have samples, experiments and sessions — I mainly do microscopy, so that's where I store that information — and I also have things like procedures. But let's just take a look at one of these samples over here so I can demonstrate how I've got it set up. Inside a sample I'll have a short description and a thumbnail of what it looks like so I can recognize it. Then I'll have information about where it is, but also information about how it's related to other database items, in the form of links, basically. So I have created another sample from this sample, I've done a certain experiment on this sample — if we look over here you can see that I've created this transmission electron microscopy sample from the other sample. I also know where it is, and it's linked back to that other sample as the parent. We can look in here — we did a microscopy session on this sample, and this is then basically like a notebook, right? I document what kind of goals I had during this session, I say which techniques or procedures I used, which instruments, which samples, and then it's just saying what I did.
And I also upload low resolution thumbnails of the different files that were produced so I can go back and quickly figure out okay this file is that and I can search through it. And then I also have links to the raw data in here. So the way I store it now is currently I'm storing it kind of on a network drive here in my house which is just a drive connected to a Raspberry Pi. It's not ideal of course because once I'm outside my house I can't access this but at least I know that for now I could go back if I have a sample. Everything is now connected the way I want. Data files, previews of data files, how the data was obtained, the samples, etc. And that's kind of the gist of it really. So that's how I build up my database and kind of all the knowledge and information that I have. And in many ways it's kind of similar to a blog but it's a very powerful blog. So first of all I mean this I don't really use that much all the authorization functionality because I'm the only user of this instance but that's pretty powerful. And a second thing which is quite powerful is the Python API. So one I built a very small project to use this Python API to create QR codes and I want that because the samples that I have I want to be able to easily go directly to the core corresponding ELAB FTW page from the item. So I want to be able to scan a QR code basically with my phone. And to create that I built this thing and it's just a small Python tool. It's available on a PyPI and I've installed it on my Raspberry Pi upstairs which is connected to like a label printer so I can basically from here directly send it print instructions. So for example if we now go in here let's say we want to list the items just to show you that okay. No list ELAB items it is and we want to search for things that I don't know have platinum in it PT and we want in the category of sample. So then it will give me everything where the PT is in the name the title here so you can see the correspondence and it will give me the date and the ID and then if I want to create a label for example for this sample over here then I would do print, print sticker and then just the ID number. Now I don't want to run upstairs and get the sticker so I've already created the sticker and then it looks like this and basically then I have a sticker I stick it on my box and that will if I scan it bring me directly to the right ELAB FTW page. So in my opinion that's already pretty powerful I can connect the real world to the digital and there's like a strong persistent link between all those things as long as I'm keeping things up to date. So in the future I actually hope because creating these database items can be quite tedious manually I have to do the links so I'm thinking of also creating something that would for me automatically create certain database entries or fill out some information at least automatically and this is perfectly possible with the API so that's pretty powerful but I haven't done it up to now but I hope that something like this gives you an overview of what ELAB FTW can do and how you might be able to apply it to your workflow. So thank you for listening and I hope to see you later.
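The QR-label workflow described in this talk can be sketched roughly as follows. This assumes the elabapy Python wrapper and the qrcode package; the endpoint, token, item id and URL pattern are placeholders for illustration rather than the speaker's actual setup.

```python
# Rough sketch: look up a database item through the eLabFTW Python API wrapper
# and turn a link to its page into a QR code image, ready to print on a sticker.
import elabapy
import qrcode

BASE = "https://elab.example.org"                        # hypothetical instance
manager = elabapy.Manager(endpoint=f"{BASE}/api/v1/", token="YOUR_API_KEY")

item_id = 42                                             # the database item to label
item = manager.get_item(item_id)                         # fetch title, category, ...
print(item["title"])

# Encode a direct link to the item's page (URL pattern is a placeholder)
img = qrcode.make(f"{BASE}/database.php?mode=view&id={item_id}")
img.save(f"item-{item_id}-qr.png")
```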
Keeping work organized in experimental materials science research is a nightmare. Projects involve data collected with dozens of different instruments on dozens of different samples that are related to each other in a hierarchical fashion. For each new project, researchers struggle with questions like: how should I organize my files and data? How should I name my samples? How should I keep track of the links between data and samples? Since no standard answer to these questions has been formulated, labs and individuals just improvise. The result is that most data and samples become utterly useless once the person who conducted the research leaves; no one else can find their way through the ad-hoc naming conventions and various excel sheets. This eventually translates into a lot of wasted and repeated efforts. The biological sciences have long ago figured out solutions to these problems, and they are lab information systems (LIMS) and digital lab notebooks. In this talk I will present how I organize my research workflow in eLabFTW, a free and open source lab notebook with some LIMS capabilities. The tool was originally developed with molecular biologists in mind, but most of the tooling is useful for materials scientists. I will also talk about how I could leverage the python API for tailoring the tool to my needs, for example for printing QR-code stickers for database items.
10.5446/53346 (DOI)
Hi everyone, I'm Benjamin Ooghe-Tabanou and I'm a research engineer and technical director at médialab Sciences Po in Paris. I'm going to present a brief, and therefore not exhaustive, overview of web crawlers used in the social sciences context. Then I will show you a quick demo of our own web crawler, HyBro (Hyphe-Browser), which we just completely redesigned and released as version 2.0. But first, let's talk about web crawlers. Does anybody not know what a web crawler is? I would usually call for a show of hands, but hey, it's 2021, so let's assume that some people don't. A web crawler is a piece of software that will visit and collect web pages, find links within them and automatically visit them all, either systematically or selectively, following a certain set of rules. This is what all search engines have been doing since the early times of the internet to build their indexes, but it can also be used to collect data and text contents, detect network communities, etc. So why would anyone want to do that for the social sciences? Well, the thing is that HTML, as it was conceived by Tim Berners-Lee, revolves entirely around the concept of hyperlinks. When a webmaster adds a link, it is most of the time a manifestation of an intention to express some proximity. So these links carry meaning and they create structures which are definitely worth studying for the social sciences. Here are a few outputs, for instance. On the top left, you can see a network of links between websites which were crawled because they talked about climate change, and which were categorized depending on whether or not they believe mankind is responsible. On the right is a structure of citations between media websites, showing that the main reliable media are cited among themselves and by all others, whereas fake news and conspiracy theorists cite them but do not get links back in the opposite direction. Here is a map of actors speaking about privacy issues on the web, and here is a categorization of 460 French media using a stochastic block model, which takes into account the direction of the links to compute communities. So how does it all work? There are plenty of ways to program a crawler, but some strategies are more common than others. The most typical one is called snowball: we put in a queue all the links we find in every page visited, then visit them all, one by one, up until we reach a maximum number of pages or a maximum distance from the first URLs visited. But since the web structure itself is in layers, this approach commonly results in a strong attraction to the biggest actors — Wikipedia, Google, Facebook, Twitter, etc. — since almost every website has links to them. An alternative option can be, for instance, to do the same with some human curation, where each crawl returns a list of sites to consider, and the user decides whether to include them or not before crawling them. Another typical strategy is called focused crawling, where you only follow links which respect a specific focus of your interest. For instance, we could start a crawl on the FOSDEM website and keep only the pages in which we find the words free or open source. And we can imagine plenty more strategies depending on whatever you want to study (a toy sketch of the snowball and focused strategies follows below), and this is why so many different tools have been built over time to help researchers crawl the web. So let's review a few.
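Here is that toy Python sketch of the two strategies just described: a depth-bounded snowball crawl, and a focused variant that only keeps pages containing a keyword. It uses requests and BeautifulSoup and deliberately leaves out everything a real research crawler needs (robots.txt, rate limiting, deduplication by web entity, and so on).

```python
# Toy snowball / focused crawler: breadth-first over discovered links,
# bounded by depth, optionally filtered by a focus keyword.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_depth=2, focus_word=None):
    seen, queue, pages = {start_url}, deque([(start_url, 0)]), {}
    while queue:
        url, depth = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        # Focused crawling: skip pages that do not match the topic of interest
        if focus_word and focus_word.lower() not in html.lower():
            continue
        pages[url] = html
        if depth >= max_depth:
            continue
        # Snowball: enqueue every new outgoing link found in the page
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if link.startswith("http") and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return pages

# Example: pages = crawl("https://fosdem.org/", max_depth=1, focus_word="open source")
```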
SocSciBot is probably one of the oldest, and it seems to have been dormant for five years, but it was built by a research team in the UK. It took the form of a Windows GUI application, which looks kind of old now, but it already allowed you to set many options and visualize the graph between the URLs. It even asked for your email so that it could write to all the webmasters of the websites you were crawling and inform them of it. Another quite old one, which was actually conceived by one of the organizers of this room, Mathieu Jacomy, is Navicrawler, which is now so old that we archived it on médialab's GitHub. It was a Firefox add-on, up until version 3.5.1, which is quite old now, and it would allow you to build a corpus while browsing the web manually. Each page visited was analyzed to report the list of links found within it, and the user was invited to visit and include them one by one to progressively build the corpus. IssueCrawler is the first research crawler with a full web interface accessible directly online. Built by our friends from the DMI in Amsterdam, who were speaking just before me, I think, it was for a long time the most complete crawler for researchers, proposing a wide variety of crawling strategies and very clean outputs. VOSON is also a nice web application, built by another lab, in Australia this time. And here comes Hyphe from médialab, which was already presented at FOSDEM in previous years, so I will only remind you of its principal features. In addition to having a very nice web user interface and letting the user control which websites to crawl or not, it has the specificity of allowing you to define very precisely what a website — or web entity — is. For instance, we could consider each year's FOSDEM website as its own entity, or merge them all together, along with FOSDEM's Twitter account, within one big FOSDEM entity. Which leads me to HyBro, which we just released under its freshly redesigned version 2. Here we are back to a desktop application, since it's an actual web browser based on Chromium's open source code base, using ElectronJS. The main idea is to allow you to build your corpus while browsing it, like in Navicrawler, but with a real automated crawler running in the background, namely Hyphe. And now let's play with it quickly. So here is the Hyphe-Browser interface — it's actually in French right now, so let's switch to English, and let's connect to a Hyphe demo server. You can actually create your own Hyphe server in the cloud by paying OVH, Vexxhost or CityCloud, but let's use the demo for now. So we create a new corpus — let's call it FOSDEM — and create it. It will take a few seconds to create the memory structure on the server. And here it is. So we have a regular browser: you've got tabs, you can add more tabs, close them, and within one tab we can, for instance, search for FOSDEM on DuckDuckGo. Let's click here. We have the result for FOSDEM, and we can go to the FOSDEM website. And here — so we are in a regular browser, but we have a few more things. In this column, you have information about the browsed web entity: FOSDEM.org belongs to the suggestions and it has a status, a name, known web pages (pages that are already known for this website), linked entities (so far none), tags, field notes. So let's add a tag — let's say that FOSDEM is of type "event". And let's say that we want to include it. There are three different statuses: in, undecided, and out. Let's set it as IN.
Let's decide that we want to include the whole of FOSDEM, not only 2021; then the home page of the web entity is FOSDEM.org and the web entity's name is FOSDEM.org. That's perfect. So we have included it, and now it's telling me that it's crawling it. The crawl will start, and in the meantime we can start looking at other things while it's preparing to crawl. If I go back here, we can decide to visit some specific web entities that the corpus is suggesting. If I click here, we can visit the web entities in the corpus — right now, that's only FOSDEM. The ones that are OUT or UNDECIDED — there are zero right now. And then all the ones that are suggested for review — oh, and here, some new ones just arrived. Basically, all of these were found via links from FOSDEM. So we can start visiting them one by one. Let's click here and it will open the Apache website inside the current tab. So, Apache — let's say that we want to build the corpus about free software. Apache is definitely part of it, so we will include it and set it as IN as well, so that it will crawl that website too. Then we can go to the next website. So, backmarket.fr — I don't know what this one is. It looks like something that sells smartphones, so I don't think we care about that; let's put it OUT. This one doesn't look like a relevant website, so let's forget about it, and so on and so on. Let's include Creative Commons, for instance — here, here, here, hop. And we can continue. And since we added Creative Commons, we will set its type as — I don't know, what is Creative Commons? — an organization. Then we can switch to the regular Hyphe interface and it will show us a network. Right now it's not very interesting, since we just started, but we get Apache, FOSDEM and Creative Commons, which are all linked two by two. And if we want to show all the other entities that were discovered through the crawls, we can see a few of them, and some of those which are linked by all three of them are Twitter and Facebook.com, of course. All right, that's it for the very brief demo. Thank you, and have a good day.
The World Wide Web’s original design as a vast open documentary space built around the concept of hypertext made it a fantastic research field to study networks of actors of a specific field or controversy and analyse their connectivity. Navicrawler, IssueCrawler, Hyphe... Over the past 15 years, a variety of web crawling tools, most often free and open source, have been developed by or for social sciences research labs across the world. They provide means to engage with the web as a research field or to teach students what the WWW is beyond Google or Facebook’s interfaces. We will first present an overview of this history of open source web crawling tools built for research, teaching or data journalism purposes. Then we will propose a short demonstration of the latest version of médialab's HyBro, aka Hyphe-Browser, a tool built to let users benefit from automated web crawling as well as in situ web browsing and categorizing. Its friendly user interface allows a variety of publics to engage with web crawling, including non-experts like students, social science scholars, and activists.
10.5446/53348 (DOI)
Hi everyone, I'm Jim Hall, and thanks for attending my lightning talk about why your PC only has 16 colors — and why is there a bright black anyway? So as I said, I'm Jim Hall and I've done a number of things in open source software, but one thing you might know me from is being the founder and project coordinator of the FreeDOS Project. Now if you've ever booted FreeDOS or any other operating system into plain text mode, you probably noticed you only have these 16 text colors. I am talking only about text in this case. You're getting these 16 text foreground colors and actually only eight text background colors. Why only 16? Why only eight? And why are they called black, blue, green, cyan, red, magenta, brown and white? Where do those color names come from? Well, like many things in technology, it really comes down to compatibility. And with text color, it actually goes back to compatibility with the original IBM PC released in 1981 — the IBM Personal Computer 5150 — and specifically the IBM Color Graphics Adapter, the IBM CGA. All the decisions about text color that you see today in 2021 actually date back to 1981 and the original IBM CGA. So to talk about color, let me briefly talk about binary. As you might know, binary is just a series of things that are on or off. It's like a light bulb: it's either on or it's off. And PCs, certainly retro PCs, like to think in a series of eight things that are on or off. Each thing that's on or off is called a bit, and when there are eight of them altogether, that's called a byte. And so here I've represented four different numbers using light bulbs, or little circles, showing the numbers one, two, three and four. The number one is just a series of offs plus an on in the one position. Two is a series of offs plus an on in the two position and an off in the one position. If I have an on in both one and two — well, one plus two is three. And in the bottom row, you can see it's a series of zeros except for an on in the four position. So that's how we can count in binary. You can count all the way up to 255 using binary in an eight-bit system. Of course, we don't actually represent binary using little light bulbs anymore; we just use the numbers one and zero to represent on and off. And so here I've got those same numbers — one, two, three and four — represented in binary using zeros and ones. And that's really all you need to understand why color looks the way it does. So let's talk about color. I can take color and represent it as three different light colors: red, green and blue. And if I mix those, I can actually get a variety of different colors, right? So if I have red, green and blue all happening at the same time, I get white. And if I have all of them turned off, of course, I get black. And I can mix any two colors to get a variety of other colors as well. In fact, if you look back at the original IBM CGA and the original IBM color display, each pixel was actually a series of red, green and blue dots — they were just so close together that they looked like a single dot. And so you can see I've drawn a little square around each pixel that would have been on the screen. So again, every color was represented as a mix of red, green and blue. Well, if it's red, green and blue, and I have them on or off — well, I can represent that through binary. It's either on or it's off. And so red, green, blue — RGB — I'm going to represent that as binary. And so each one of these binary numbers is really R, G and B.
And that's why if I have 001, that means only blue is turned on. So I get a blue color. And if I have 100, that means I've only got red turned on and I get a red color. And if I have 101, that means I've only got red and blue turned on and I get sort of a magenta color. And if I have all of them turned on, 111, I get full white. And if they're all turned off, of course, I get black. So that's a great way. I can actually get eight colors. I can get zero to seven. I can get black to white. Well, IBM engineers were smart and they realized we can actually add an extra bit and we can double the colors. And so that's what they did. They added an extra bit. And so the rule was is that now we don't have just RGB. We have I RGB because the left bit now is an intensity. And so if I have the intensity turned on with a one on the right-hand side, you can see that those colors of 1,000 to 1,111, those are all bright colors. And so I've got bright black to bright white. And if I have the intensity turned off, so I've got zero, like I've got 0,000 or 0,111, well, I'm going to display the color at, let's say, some half intensity, some half bright intensity. That gives me just regular black through white. And that gives me now 16 colors. And I can probably stop here. Oh, wait a second. I actually don't have 16 colors because black and bright black are actually the same color because I don't have red, green, or blue turned on at all. So OK, we can fix this. We can do an adjustment. So we're going to modify that IRGB. And we're going to say if it's low intensity, if that intensity bit is turned off, it's set to zero, then certainly if any of the bits in RGB are set to zero, it's going to be off. But if it's set to one, I'm going to set that to like a two-thirds brightness with the intensity turned on. So it's a one in that leftmost bit position, then for any zeros in red, green, and blue, I'm actually going to turn that to a one-third intensity and any ones in RGB. I'm going to set that color to be a full brightness. And so that means that now I've got these 16 actual colors from on the left, black to white, and on the right-hand side, I've got bright black to bright white. And by the way, that is why you have bright black. And so now I've got all the colors that I need. I've got all the colors of the rainbow, except, oh, wait a minute, I actually don't have all the colors of the rainbow here. Because if you remember, your colors of the rainbow are actually red, orange, yellow, green, blue, indigo, and violet. And I can kind of fake it with the blue, indigo, and violet. And I certainly have red, yellow, and green, but I actually don't have orange. Orange wasn't on that list. So how am I going to get around that? Well, the IBM engineers are very smart, and they figured out a way around that. So they said, OK, for the low intensity colors, yellow is actually going to be orange. And we're going to do that by making green have like a one-third brightness. And so then if I look at this color range, I get from on the left-hand side, the low intensity black through white with yellow is no longer yellow. It's actually orange, although IBM had to mess it up somewhere. And so it's not actually called orange. It's called brown. And on the right-hand side, we've got the bright black all the way up to bright white. And that is the color set that you find on plain text. But why do we only have 16 colors? 
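The IRGB scheme just described can be sketched in a few lines of Python; the 1/3 and 2/3 levels below follow the talk's description rather than measured CGA hardware values, and the index-6 special case is the brown adjustment.

```python
# Sketch: 4-bit IRGB color index -> (R, G, B) on a 0..255 scale.
def cga_color(index):
    i = (index >> 3) & 1          # intensity bit
    r = (index >> 2) & 1
    g = (index >> 1) & 1
    b = index & 1
    lo, hi = 85, 170              # roughly 1/3 and 2/3 of full brightness

    def level(bit):
        # low intensity: off -> 0, on -> ~2/3; high intensity: off -> ~1/3, on -> full
        return (255 if bit else lo) if i else (hi if bit else 0)

    rgb = [level(r), level(g), level(b)]
    if index == 6:                # low-intensity yellow -> brown: reduce green
        rgb[1] //= 2
    return tuple(rgb)

for n in range(16):
    print(n, format(n, "04b"), cga_color(n))   # note index 8, "bright black", is gray
```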
This answers why we only have 16 colors for the foreground, but it doesn't answer why we only have these eight colors on the left for the background. Why is that? Well, it actually comes to how the bits are represented in a full byte. So clearly I've got four bits that I'm using for full foreground text colors. So foreground, foreground, foreground, foreground, and that's the IRGB bits. The background colors, I only have eight of those. That's clearly going to be using only three bits, and that's just RGB. Well, what's that last bit? What's the last one that's going to fill up my full byte? Well, it's the most important part of user interfaces ever, and that's blinking text. So the leftmost bit is the blink called the blink bit. And if you have that turned on on a CGA system, your text will blink. And by the way, later VGA systems allowed you to reset what the blink bit did, and it would actually give you bright background colors. So if some of you are saying, oh, you actually do have these bright background colors, you're using a VGA system. But on the original CGA system, that was a blink bit. Anyway, thanks very much for attending this lightning talk about why your PC only has 16 colors. My name is Jim Hall. I have questions in the chat. Thanks, judge. Thank you.
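The full attribute byte described above — one blink bit, three background bits, four foreground bits — can be sketched with a small, hypothetical helper:

```python
# Sketch of the CGA text attribute byte: bit 7 = blink, bits 6-4 = RGB background,
# bits 3-0 = IRGB foreground.
def make_attribute(foreground, background, blink=False):
    assert 0 <= foreground < 16 and 0 <= background < 8
    return (int(blink) << 7) | (background << 4) | foreground

def split_attribute(attr):
    return attr & 0x0F, (attr >> 4) & 0x07, bool(attr >> 7)

attr = make_attribute(foreground=14, background=1, blink=True)  # bright yellow on blue, blinking
print(hex(attr), split_attribute(attr))                          # 0x9e (14, 1, True)
```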
Your computer only supports 16 text colors, and 8 background colors. Why so few colors? And why is there a "Bright Black"? This fun lightning talk will explain the origins of these 16 colors, and why the colors look the way they do.
10.5446/53349 (DOI)
Hello, welcome to this first talk of the retrocomputing devroom. I'm Christophe Ponsard, speaking from Belgium — actually from my retrocomputing cave, and you can see behind me some of my old machines. In this talk I will share with you some thoughts about digital preservation. The context is that I am a volunteer at the NAM-IP computer museum, which is located in Namur, Belgium, not far from Brussels. So if you are in Belgium sometime in the future — hopefully maybe next year at the next FOSDEM, which I hope will be physical again at the ULB — please pay a visit. You know, our general mission is to preserve, acquire, exhibit and also research pioneering computers and digital heritage in general. You can see the design of the museum: it's an old sports hall and everything is arranged in containers — and, you know, containers evolved in parallel with computers, which is why we have that exhibition solution. As a museum we have a lot of preservation constraints. Actually, when a machine joins the collection it becomes very hard to power it up. As a use case I will use the DAI personal computer. It's an early microcomputer from 1978; it was quite advanced for that time and it's very rare, so it's really a problem to use it. And the question is: how do we keep the expertise, how do we transfer programs, and how do we show running software to visitors? So of course we ended up with the idea of using emulators and recording some videos and things like that, but that triggers a number of questions: we have to look for emulators, ask what the interesting usage scenarios are, how to select a convenient emulator, and of course, since emulators are themselves software, how can we make sure that they will be maintained and usable in the future? So clearly we are moving here from physical to digital preservation. Just a quick disclaimer to tell you that I'm still discovering this area, and my main goal is to share some thoughts about digital preservation and emulation with a focus on retrocomputing. I will not cover legal issues, and of course this view does not claim to be exhaustive. The outline will be as follows: I will have a general look at digital preservation, then focus on emulators, then look at use cases and the preservation tool chain, before looking at long-term preservation strategies for the emulators themselves and then moving to some open discussion. Digital preservation means we want to maintain digital objects accessible and usable in an authentic way for the long-term future, and digital objects can cover a large scope: all kinds of documents, programs — including games — and also social media. Our focus will be more on the program side, and there are two complementary dimensions. First, usability: we want to be able to read the document, to run the code. One point is that in some cases usability could even be improved — for example, skipping the long tape loading is of course interesting, and improving the resolution is also an improvement — but that could impact the other dimension, which is authenticity. Of course we want to preserve the look and feel, we want to preserve the experience; if the experience gets "better" in some way, for a museum that's not our target, but for other use cases, like gaming, it's interesting. So this is a matter of discussion.
There are several strategies for digital preservation; I will quickly go through them with a focus on retrocomputing. The first ones — total preservation, encapsulation and extraction — are not so interesting, as they have a lot of drawbacks: total preservation is very expensive and of limited usage, encapsulation generally just postpones the problem, and extraction is only a fallback for a degraded mode. So the main strategies that I recommend are migration and emulation. Media migration is always interesting because you are transferring the information from an older, unsupported technology to newer media — like from tape to floppy to hard drive to cloud storage. For documents and applications, the idea is to migrate them to the most recent platform version in order to be able to use them; it means that you have to do this regularly, each time there is some kind of update, and of course one problem there is that you might lose information each time, and so suffer progressive degradation and lose the authenticity or understandability of the material. Emulation is another solution that has the advantage of preserving the original digital resource unchanged: you just emulate the old platform to be able to run it in its previous state. Of course there are a number of issues here, like the need to maintain those emulators and to evolve them, and of course they are not totally perfect. The last one, the universal machine, I will come back to later. Coming back to emulation, we can define it as hardware or software — in our case mostly software — enabling one computer system, the host, to behave like another one, the guest. It can be seen as a kind of digital twin of that computer system, and the technical goal is really to be able to run the software unchanged on the host system. In order to have a look at some scenarios for emulators, I will go through a timeline, and while I do, just try to see how they relate to the past, to the present and to the future — we will come to that just after. This is the emulator timeline; let me go through the main milestones. Larry Moss coined the term emulation at IBM in the 60s, and the first emulator was for the IBM 7070 running on the System/360. In '75, Allen and Gates developed their BASIC by emulating the 8080 on a PDP-10. Emulation was also used at CPU level — for example, the 286 can emulate the 8086 in real mode — and floating point was also emulated in software, because FPUs were very expensive in the 80s. You can also see that, as the IBM PC emerged as the standard platform, other platforms developed emulators for it, like the Amiga and the Atari. In the 90s we see the development of game emulators — first the NES in 1991, the Game Boy in '95 — then in '97-'98 the MESS and MAME projects starting, and merging a bit later, in 2015. DOSBox arrived in 2002; virtualization technologies have also been used, like VirtualBox and QEMU, around 2007; compilation to JavaScript came in 2012 and was adopted by the Internet Archive a bit later. Interestingly, emulation was also used at OS level to support the migration of hardware from PowerPC to Intel — the Rosetta project in 2006, supported for a few years and then discontinued — and now, in 2020-21, we have the Rosetta 2 project to support the migration from Intel to ARM. So if you listened to me, you could discover the following usage scenarios: about the past — digital preservation, which is our use case; from past to present we have backward compatibility against technology
For the present we have market compatibility or cost reduction. And for the future, that one was interesting because the Altair BASIC case was about developing for a new system before the hardware was even available. Going quickly through retro computing use cases: of course the most famous is retro gaming. You may also want to use specific software from the past for some of its specificities, like a non-WYSIWYG text processor such as WordStar, which is still used by some writers, for example George R. R. Martin of Game of Thrones. Third, accessing old files, and for that you might have to use old software as well. And last but not least, for our case, computer history: we study the past and also give the public audience some experience of that time. In order to select an emulator for a specific usage scenario we can identify interesting criteria; I'll quickly go through some of them. First, the ease of installation and configuration: of course you don't want to compile it, to look for ROMs, to configure keyboards; you really want something that is already configured, or even something that runs in a browser. About the ease of use: you don't want the raw emulator, you want it to come with a number of utilities for media management and snapshots, with a nice user front end. About accuracy, it's more of a trade-off between lower-fidelity emulators and higher-fidelity, maybe cycle-exact ones that require more processing power; the latter are now favoured because of the growing processing power available, and of course they are also better for preservation. About long-term support: you don't want to rely on a very recent project with few people or exotic technology; you want to make sure it will be sustainable, so it should have a long history, a large community, maybe adopt a virtual machine approach as we will see, and so on. So actually the choice will be, depending on the scenario, some kind of balance between those criteria. You don't have to select only one: it's interesting to have more than one emulator, but then comes the overhead of managing those emulators. So usually one good idea is to use one multi-system emulator, because they provide unified management, and if you need something that has some specific feature or more accuracy, you can use a second one. To illustrate, here is a comparison of two emulators for the Amstrad CPC: you have MAME on the left and JavaCPC on the right. As you know, MAME is a multi-system emulator, so in this case you need to install the ROMs and configure the keyboard. It's written in C++, meaning if you want to evolve it, it's through compilation; the community is very large, the codebase is mature, and it has been maintained very well since it started. JavaCPC, well, when you run it everything is perfectly configured; you even have a desktop with many utilities and a virtual keyboard. It's written in Java, so you don't really need to recompile it, if you are sure that you can still use a Java virtual machine in the future. And in the end you can see that both are perfectly running my BASIC applications, without any real accuracy problem, and you can even exchange a program between them via a virtual disk, so they can perfectly be combined.
So, where to look for emulators? Of course there are hundreds of websites, so I'm just giving some entry points. For multi-system emulators, just go to the Game Tech Wiki; you will find all the main multi-system emulators. For specific emulators, well, for computers the Wikipedia page is very good; for consoles you can also look at the Game Tech Wiki; for French speakers, Planet Emu is really good. For those looking at emulation on a Raspberry Pi there are RetroPie and Recalbox, and for the web browser you have here a few pointers to websites that use that technology: the Internet Archive, PCjs and TinyEMU. Last but not least, well, this is a fan page; there are many fan pages, but there is one for my use case, the DAI computer, and you really have to look around to find those pages that provide very rich content, usually about the history of specific computer systems. Once you have selected your emulator, it's interesting to understand how it will interact with all the physical and digital artifacts you have. Of course, as a retro gamer you will only use the emulator and the ROM, but in the case of a museum we have a lot of artifacts like listings, tapes and floppies, and this is an illustration of how those artifacts interact for the DAI computer. We are using the MAME emulator; this is a physical DAI; you can see paper listings, tapes (these mini tapes from the Memocom reader) and floppies. And we have a number of utilities that enable us, first, once we have WAV recordings, to transfer them to binary, maybe to transfer them to disk images; those images can also be made from the floppies, then loaded into the physical DAI using an SD drive. And of course we can also interact with the emulator, loading the WAV file or directly injecting listings from the scripting interface; for this part you can have a look at my other talk later during this FOSDEM. To illustrate this I will show how to load a tape in the emulated DAI using a WAV file. You can see here we are specifying the tape and playing it; OK, it will start; you can see it was recognized; it would last for four minutes, but the emulator is accelerated and the video is also accelerated, so it will start really quickly. Now we are in, we are selecting the keyboard and running the game, and now we can finally enjoy Pac-Man. Now let's come to the question of the long-term preservation of emulators. As I told you, emulators are also digital artifacts and they also depend on a host system. To summarize, there are three main approaches to manage the long-term evolution. The first one is migration again, but migration of the emulator itself, meaning recompiling it for a new host. The second one is to emulate the previous system, which means that we will stack the old emulator on a host emulator running on the new host. And the last one, the universal emulation machine, is to really isolate it from any specific host; and of course these approaches can be combined. To illustrate the migration approach, you can see we have an initial situation here with a 2010 emulator, and in order to run on the new hardware platform we will recompile this emulator to be able to use the services offered by the new operating system. Of course it needs a community for the recompiling, but this is the same community that will also maintain and improve the emulator. It will run the same, with no performance overhead, so it's interesting for reliability and performance. In the second approach we are keeping the previous emulator unchanged, but we are simply emulating the old platform on the new one using another emulator, so it will be
developed by another team; for example it could be a Win32 emulator on a Win64 platform. It will be transparent, but it could of course have a performance impact, and maybe also some accuracy or reliability impact. A third approach is to use a virtual computer: for example, considering we have our JavaCPC application running on an old hardware platform, if we want to move to a new hardware platform we just have to change the virtual machine layer, and we keep the emulator and the application unchanged. One point is that we have to make sure that this virtual machine is still able to run the same Java version as the previous one, so there are still some possible issues with bytecode evolution and some backward compatibility that must be ensured. A more general approach is the idea of the universal machine. In that case we really provide the specification of the virtual machine itself, so we make sure that in the future that machine could be re-implemented as necessary. It's really a full specification of the machine, but also of all the formats and all the viewers, and you make sure that you will be able to run on any host system. But re-implementing a virtual machine can really take a lot of effort; it's not really suited for emulators, more suited for documents, and that approach is actually a bit idealistic. To conclude, emulators are really great tools for many purposes: at a technical level for museums, but for everyone they provide an easy way to enjoy retro computing and retro gaming, and so help preserve our collective memory. I hope you enjoyed this quick journey into digital preservation and emulators. On my side, I really enjoyed preparing this talk, and it helped me realize it was not so easy to capture and explain the full picture, and maybe it also raises some questions, like: do we need some way to manage that knowledge base? So now, well, feedback, ideas and your questions are welcome.
Software emulators are wonderful tools to study old computer systems for different purposes, from running legacy applications to retrogaming. This talk explores the context of digital preservation, triggered by on-going work in a Belgian computer museum where emulators help in rediscovering old systems, maintaining/recovering knowledge of their design and sharing the experience with the audience without stressing fragile old machines. This talk aims at exploring, and somehow engaging the audience about, some simple questions from that perspective: where to look for emulators (MAME/MESS, specific developments, JavaScript ports...)? How to select one for a given usage context? And last but not least, as emulators are themselves part of history: how to make sure of/contribute to the sustainability of those nice pieces of software in the long run? The talk will not provide definitive answers to the above questions but documents some guidelines gathered so far from different perspectives: usage, architecture, exchange between the emulated/physical world, open/closed approaches, community aspects... based on the speaker's experience as a user of emulators personally and at the NAM-IP computer museum, but also as a software engineer experienced in evaluating software maintainability, especially of open source projects. It will be illustrated on a practical case from the museum: the DAI "Imagination Machine" microcomputer. The ultimate long-term goal could be to build a knowledge base shared across the community developing and using emulators.
10.5446/53350 (DOI)
Hi everyone, I'm Jim Hall, and thanks for attending my talk about working on DOS in 2021, because we are getting closer to FreeDOS 1.3. In fact, this presentation is basically a history, a current status, and a future of the FreeDOS operating system. Now let me introduce myself. My name is Jim Hall, as I said, and I've done a number of things in open source software over the years, but one thing you might know me from is being the founder and project coordinator of the FreeDOS project. And I'll provide a little bit of background about myself. A bit of brief history: I got my start in computing with the original Apple II. Actually, my family had a clone of the Apple II called the Franklin Ace 1000. It was a hardware-level clone of the Apple II computer. And it was on that computer that my brother and I taught ourselves how to program in the Applesoft BASIC programming language. A favorite thing of mine was, any time I saw a computer display on television or in movies (and this was the late 70s), I would try to replicate it on the Apple II. And I liked to write a lot of games, a lot of simulations. It was kind of a neat thing for me to do. But eventually I wanted to do more with BASIC programming than what I could do on the Apple II. Well, eventually my family upgraded and we replaced the Apple II with the IBM PC. This is the IBM PC 5150 from 1981. And DOS was definitely a different operating system than the Apple II. The Apple II had a very simple command line, but DOS was the first thing I had experienced that allowed you to use the pipe to connect different commands together. And I thought that was a neat thing that DOS could do. The command line was still pretty simple. In fact, this is a screenshot of PC DOS 1. And as you can see at the top of the screen, it doesn't even have a CLS command at this point. DOS was very, very simple. We did have a BASIC environment, BASICA, and that's where I did all of my DOS programming. And so I would write programs in BASIC, again writing a lot of little games, little simulations. But BASIC did the job. Now, as time went on, Microsoft released MS-DOS 5, and that's really where my interest in DOS started to soar. I had been a casual DOS user up until that point, but DOS 5 really felt like a complete rewrite. They had done a lot of new stuff in DOS that really added to what DOS could do. And one thing that I liked about DOS 5 was that they replaced the old BASICA, which by then had been replaced by GW-BASIC, with a new BASIC system called QBasic. QBasic, if you don't remember what that was, was an interpreted version of BASIC. It was actually a simplified version of Microsoft's QuickBasic compiler. And so yeah, you could do a lot of neat stuff in BASIC. And I used QBasic in my classes. So if I needed to do some lab analysis in a chemistry class or a physics class, or I needed to run some numbers for math class, I would write a little program on our computer at home in QBasic just to run some sort of a simulation. Now, around this time, I started to explore other programming languages, and eventually my brother taught me about the C programming language. And it was C where I started to write my own versions of the DOS command line. So DOS had a not-bad command line, but I did notice that you could improve it by writing my own commands to maybe replace some DOS commands, maybe to add some new DOS commands that I felt needed to be there.
But that was my first experience with DOS: first with QBasic and then by writing some simple file and command utilities in the C programming language. But of course, I also loved using all of the applications. That was the main reason to run DOS. Throughout college, I used this spreadsheet here: the As-Easy-As spreadsheet. This was a shareware program. And if you didn't know what shareware was, shareware meant that you could download a program at no cost and try it, usually for a limited amount of time, like maybe a month, or they might have a certain number of times you could run the program. But you really were expected to register the program. It typically wasn't very expensive, usually less than $100, which is way better than paying several hundred dollars for a full application like a word processor or, in this case, a spreadsheet. So this is As-Easy-As, a shareware spreadsheet, basically similar to Lotus 1-2-3. And I used As-Easy-As throughout my college career in the early 1990s. Anytime I needed to do lab analysis for the physics classes I was taking (I was a physics major), or if I needed to run a simulation, I would do it in As-Easy-As. I lived in this thing. By the way, today you can download this at no cost. It's not open source software, but you can download it at no cost. TRIUS, the people who made As-Easy-As, actually released a code that you can use to unlock the full program. They also have a copy of the manual. There were lots of other great things about DOS that I loved, like the DOS shell in DOS 5. In MS-DOS 5 it got a major update. There was a DOS shell in DOS 4, but it just wasn't that great, and in DOS 5 they really did update it. I especially liked one feature in here that allowed you to swap between different running programs if you activated that on a 386 or better machine. So yeah, lots of neat features in DOS. I also thought the DOS shell was just simple enough that you could navigate to files and do file operations. And you can see on the bottom, you can make different launchers for different types of programs you ran on your system. And so I loved that. I also loved playing a bunch of games. This is Dark Forces. It was one of my favorite games, and in fact, I still sometimes go back and play this great DOS game. I played a lot of other games as well. I played Commander Keen; this is Commander Keen, the first one. And this was basically, you were a little eight-year-old kid and you had built a rocket out of parts from around your house. And you crash-landed on Mars, and so you got to wander around and do different things and basically find your way home. And then of course, Tomb Raider was originally a DOS game. So yeah, all these great games on DOS, and I loved playing games on DOS when I wasn't doing other work. Now at some point, I also discovered Linux. In 1993, I discovered the Linux operating system. I was looking to upgrade DOS on my machine. I wasn't a big fan of Windows; I'll talk about that in a second. And I asked around online and said, you know, what can I use? Because I had discovered Unix in our computer labs at university, and I thought that was kind of neat. Unix has a very similar command line to DOS. You need to relearn some of the commands, but they're basically all there. So for example, the TYPE command on DOS is replaced by cat on Linux, and DIR is replaced by ls. But other commands are the same, like CD to change into a directory.
And you could pipe commands from one command to another, except there were more commands to choose from on Unix. And of course, Unix was a multitasking operating system. So I had asked around and said, can I run a version of Unix on my home computer? And eventually somebody said, you might want to try out this new thing called Linux. I actually bought a Linux distribution called SLS, the Softlanding Linux System, and (this is a screenshot of the next version up) SLS Linux 1.05 came out in 1994 with the Linux kernel 1.0. I actually used SLS, I think it was version 1.03, which had Linux kernel 0.99, patch level 11, I think it was. And even then I could see that Linux was a very powerful operating system. I was impressed that people were able to create a free version, an open source software version of Unix, to run on PCs. Of course, there weren't a lot of applications for Linux. And the GUI was, you know, not bad, but again, you didn't have a lot of applications to choose from. I usually used it to run Emacs, which was my editor of choice, and a couple of different shells, and that made it easy to run different things at the same time without having to flip back to the command line. But yeah, it really did open up the power of my PC. And as I said, I was really impressed that people could create a free, open source version of Unix and just give the source code away. Now, I wasn't a big fan of Windows. I remember seeing articles in 1993 and 1994 saying that Microsoft planned to get rid of DOS in the next version of Windows; the next version of Windows would completely eliminate DOS. Well, if you remember what Windows looked like (and this is what Windows looked like in 1994: it was Windows 3.1), it was not that great, right? It was very clunky. It was unstable, it would often crash, it felt really slow. There were some applications for it, but it wasn't great if you didn't have the stability to run those applications. Meanwhile, I could get by just fine with my DOS applications. I could run a word processor: my favorite was WordPerfect, and then later it was GalaxyWrite, which was a shareware word processor. And of course, I did all of my lab analysis using As-Easy-As, and I completely relied on those DOS applications. In fact, I had dual-booted my computer with MS-DOS and Linux. So I'd use Linux to do a lot of my programming, especially for class, but if I needed to run an application like my word processor or my spreadsheet, I would boot back into MS-DOS and run those applications there. We didn't have virtual machines like you have now. So I didn't like the fact that Microsoft was saying, yeah, we're going to totally get rid of DOS in the next version of Windows; the next version of Windows will completely do away with DOS. And I thought, man, if Windows 3.2 or 4.0 looks anything like what 3.1 looks like, I don't want anything to do with it. I would rather run Linux and DOS to keep running my DOS applications. And so on June 29 of 1994, I posted this note to Usenet (back then we used Usenet as basically a discussion board) saying: I'm going to start a new project to replace DOS.
Because I'd actually asked a couple of months before this, hey, is anybody tracking what's going on with Microsoft? Microsoft says they're going to stop doing DOS. And everybody said, yeah, we're seeing that. But then it's like, okay, so is anybody writing a free version of DOS? There's a free Unix system called Linux, so writing an open source version of DOS shouldn't be that much harder, because DOS is pretty simple. People basically responded to that and said, that's a great idea, and you should do that. And so, not knowing what I was getting into, I said, okay, sure, I'll go ahead and start a project. At the time we called it PD-DOS, because I thought it was going to be public domain; I thought that's really what open source and free software meant. And so we called it PD-DOS. I called it PD-DOS also because you had MS-DOS, we also had PC DOS from IBM and you had DR DOS from Digital Research, and so following that pattern of two letters and DOS, it was going to be PD-DOS, which I thought was kind of cute. But a couple of weeks later, it didn't take long for us to realize that we weren't actually trying to create public domain stuff. In fact, we didn't want Microsoft to take what we were writing and incorporate it into their MS-DOS. So it didn't take long for us to adopt the GNU GPL, the GNU General Public License, for a lot of what we were writing, and that meant that we were actually free software. And so we changed the name at that time from PD-DOS to Free-DOS. So thinking about FreeDOS, where did it go from 1994? We actually had a website pretty early on; I think our first website was sometime in 1995. Here it is in its beautiful colors of yellow on a black background; further down, you actually get white text on a black background. But yeah, those were the colors that we had. That was very, very early; we were actually one of the first websites up on the Internet. Now, working our way up to 1.0, we had a lot of development very quickly. We had our Alpha series, starting with Alpha 1 and going all the way up to Alpha 6, from 1994 through 1997. And you can see there, between Alpha 4 and Alpha 5 (that was June of 1995), we had actually changed the name again, dropping the hyphen to go to just regular FreeDOS. And what was in the Alphas? At this point, we were trying to basically replace MS-DOS, and we had basic versions of the different programs. We actually did have a working version of DOS. However, it was really hard to install. If you wanted to install FreeDOS, you had to run a lot of commands on your own to format the drive, well, actually to put a partition on it using FDISK and to put an operating system on it using SYS. And then you had to copy the files over manually or unzip these different giant zip files. It was a big pain to install FreeDOS on your own. But it was a complete version of DOS. We had a DOS kernel: Pat Villani had reached out to me, and he had written a DOS kernel called DOS-C, and that was what we used as our FreeDOS kernel. So we had a kernel very early on. And Tim Norman had written a COMMAND.COM replacement for us very early, in 1994. And I had written a bunch of other utilities. And we found a bunch of other programs that would basically replace or actually enhance what MS-DOS could do, just by looking at different public domain utilities or these different archive sites that had DOS programs.
My favorite one, I thought, was kind of the neatest thing: a program called Spool. MS-DOS eventually had a command called PRINT that would allow you to print a file to your dot matrix printer (dot matrix was pretty much the thing back then), and that would print things kind of in the background. Well, Spool was that, plus a little bit different. It would print things in the background, but it would wait until you weren't doing stuff on your computer before it would speed up. As you were doing stuff on your computer, as you were typing, you'd notice printing on your dot matrix would slow down a little bit. And if you stopped typing, the printing would speed up. And so it was a great way to provide what felt like multitasking on DOS in those early FreeDOS versions. So what took us from alpha to beta? In 1998, I said, alpha is great, but it's really tough to install, and so I decided to sit down and write an installer. I want to say I wrote this over a couple of weekends. And as things turned out, I said, OK, I'm just going to write this quick install program, and it's going to be something that we're going to replace soon anyway; this is just a quick thing to get it done. Well, it turns out we kept that for a long time. That install program got updated a little bit here and there, but it was basically the same install program until, I think, the 1.2 distribution. So beta 1: the reason we went to beta 1 instead of alpha was that it was the first time we had an install program in the FreeDOS distribution. It was the first time we really had a proper distribution. I also decided at that time it would be kind of cute to apply code names to everything. I was a big Linux user, and I liked that Linux had these different code names, and so I named every beta release with a different code name. I was about to visit my brother at the end of March (he lives in Orlando, Florida), and so I called it Orlando. I think Marvin got its name because I had gotten an email from my university, the University of Wisconsin-River Falls, where I had started the FreeDOS project. They were mentioning, among other things, that they were retiring the old MicroVAX. And the old MicroVAX had these little terminal modems that were called Marvins. I remembered the Marvin systems, so I immortalized them by naming beta 2 Marvin. So every one of these different release code names has a different story behind it. We decided that maybe the code names had kind of run their course by the time we got to beta 9 release candidate 1. So beta 9 RC1 was the first to not actually have a code name; that was July of 2003. At this point we were trying to figure out how we were going to get to 1.0. What really was missing from FreeDOS to get us to 1.0? There were certainly a number of fixes that needed to happen in the kernel, and there were some bugs in some other places, and so we were trying to walk our way up to 1.0. But I think there was also some fear that if we got to 1.0, everybody would leave. And so you'll notice that while we did a lot of work in the Alpha series, from Alpha 1 to 6, over just a couple of years, and beta 1 to beta 8 over a couple of years, look at that: it took three years to go from beta 9 release candidate 1 all the way up to FreeDOS 1.0. We walked ourselves up from beta 9 release candidate 1 to release candidates 2, 3, 4, and 5 before we finally were ready to have a beta 9 out there.
And even then, we weren't quite happy with it. So we released a beta 9 service release 1, SR1, a couple of months later in November. And then a year after that, in 2005, we released beta 9 service release 2. Because we were like, are we really ready for 1.0? Are we ready to call this another beta? Will we have a beta 10, or are we going to do something else? And so the compromise was, we're going to call it beta 9 service release 1. But finally, eventually, we made a FreeDOS 1.0 in 2006. And so what's the difference between service release 2 and 1.0? I can't even tell you anymore. I think we had finally fixed some bugs in the kernel that people had found annoying. I know that networking and CD-ROM support had been added sometime in the beta 9 series. But going from beta 9 service release 2 to FreeDOS 1.0 in September 2006, I think it was just a bunch of bug fixes, and we just finally said, either we're ready for FreeDOS 1.0 or we're not, and so we went with 1.0. Now, since then, development did slow down a little bit. Well, development didn't slow down; the release cycle slowed down. FreeDOS 1.0 was released in September of 2006, and it took six years before we released FreeDOS 1.1 in January of 2012. And both 1.0 and 1.1, as I mentioned before, were using the same install program, just updated, that I had released for beta 1 back in 1998. That was that program, remember, that I thought we would only have for a little bit before someone rewrote something new. Nope, we still kept that all the way up through 2012. And it wasn't until the FreeDOS 1.2 release that we were looking at maybe changing some things and making them a little bit better and easier to use. I had run out of time to work on the installer; I had always meant to rewrite that installer if nobody else did, and I just never found the time. Well, Jerome Shidel showed up and he wanted to help with the next FreeDOS distribution. He said, what can I do? And I said, well, I'd love to have a new installer. But I also said, I don't know that the next installer really needs to work the way the old installer worked. The old installer was basically a C program that automated a bunch of tasks. It could have been done as a series of batch files; on Linux, you would have done it as a series of shell scripts. And I said, it feels like there's a way to do this as a series of little programs that each pop up a dialog box or something like that, and all that's really missing is a clever way to pass output from one program to another and be able to branch off and do some other stuff in other parts of the install program. When it was a single monolithic C program that I'd written, that was easy to do, because you just pop up a dialog box, then there's an if statement, and then you do something else. As a giant batch file, which is how Jerome did it, there's a lot of clever stuff that he did in there to make the install program do its thing. So credit to Jerome for writing that new install program for us. It also came with a new package manager. Up until this point, all of our packages were just zip files. A package in FreeDOS is really just a zip file that has a special directory structure in it. But at the end of the day, it's just a zip file.
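Since a FreeDOS package is just a zip archive with a conventional directory layout inside, you can peek into one with any zip tool. Here is a minimal sketch using Python's standard zipfile module; the package file name and the directory names shown in the comment (BIN\, DOC\<package>\, APPINFO\<package>.LSM metadata) are illustrative assumptions about that layout, not a formal specification of the format.

```python
# Minimal sketch: inspect a FreeDOS package, which is just a .zip archive
# laid out in conventional directories (e.g. BIN\, DOC\<pkg>\, APPINFO\<pkg>.LSM).
# The file name and the exact layout are illustrative assumptions.
import zipfile

def list_package(path):
    with zipfile.ZipFile(path) as pkg:
        for info in pkg.infolist():
            print("%8d  %s" % (info.file_size, info.filename))
        # LSM files usually carry the package metadata (name, version, license).
        for name in pkg.namelist():
            if name.lower().endswith(".lsm"):
                print("--- metadata from", name, "---")
                print(pkg.read(name).decode("cp437", errors="replace"))

if __name__ == "__main__":
    list_package("example-package.zip")   # hypothetical package file
```

Listing an archive like this is roughly the bookkeeping a package manager has to do when it installs or removes a package.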
And so we were able to unzip these files using just the Info-ZIP unzip command that we have in FreeDOS, so it was pretty easy to do. We had a little bit of package management in there, but Jerome really added proper package management with a new program called FDIMPLES. That allowed you to go in after you'd installed and actually use FDIMPLES to install a new package or uninstall a package you didn't want to have on there anymore. And it was really well done; it did a really good job of tracking all these different packages. So at the end of October of 2016, we released FreeDOS 1.2 release candidate one. About a month later, we had release candidate two, and then at the end of the year, a month after that, we had the full release of 1.2. By the way, if you're curious about those dates, we decided that we were going to use some U.S. holidays. Halloween is the 31st of October, and we said, wow, it's kind of scary that we're getting to 1.2, so "scary" is Halloween: October 31st was the release date for release candidate one. And in the U.S., Thanksgiving is at the end of November, and that year it landed on November 24th, and so we said this is giving thanks to all FreeDOS developers: we're going to release a new release candidate on Thanksgiving Day. And then, as a gift for everybody, we released the full version on Christmas. So that was a pretty fast release cycle, going between release candidate one, release candidate two, and then 1.2. I will say things actually have slowed down a bit on FreeDOS 1.3. We had a release candidate one, then a release candidate two, and then finally a release candidate three. Right now, the most current version is FreeDOS 1.3 release candidate three. It's actually a very complete version of FreeDOS, but we're having some discussion about what's going to be in 1.3, and things have slowed down a little bit. So we haven't really been in a big rush to get 1.3 out the door. We are planning 1.3 very soon, but it's not out yet. In fact, we will probably have a release candidate four before we get to FreeDOS 1.3. So what will be in 1.3, since we're almost there? We've had this discussion on the list about what 1.3 is, what 1.3 should look like. There had been some discussion about drawing a line in the sand so we understand what FreeDOS is. Well, FreeDOS is and will remain a 16-bit, single-user, command-line DOS. 1.3 is going to follow that definition. We are a 16-bit operating system, we are single user, and we are going to focus on the command line. There had been some questions, especially from some new developers, about whether we should add a GUI and whether this should really become sort of a Windows type of clone. My response has always been: it's interesting that you'd suggest making this into a version of Windows, because the entire reason I created FreeDOS is that I didn't like what Windows was doing and what Windows looked like. So I don't ever see FreeDOS really becoming a Windows clone. We do include some graphical user interfaces in FreeDOS: we have Ozone, SEAL, and OpenGEM. And, well, actually they've all kind of tailed off in terms of their development. The most complete one in there is OpenGEM. It's a very stable, complete graphical user interface. If you've ever used TOS on the Atari system, that's basically what OpenGEM looks like.
Also, there was a GEM desktop on DR-DOS, and that's actually where OpenGEM got its start. But we also had some discussions around, what should DOS do? Should we think about where DOS might have gone if Microsoft hadn't stopped doing development on DOS and hadn't gone off into Windows? So really, if you think about it, what is DOS? And if you were going to imagine projecting forward what Microsoft might have done with DOS after 1995, what would they have done? Well, they probably would have supported multitasking, because you had a lot of CPUs that could do multitasking. You would have supported a flat memory model, and you would have supported networking by default, as well as some other features. Those would have all been in there. But it wouldn't take very long before the old format of DOS executables really wouldn't support the new DOS. So you'd have to change at some point: if you're going to follow that thread, at some point you're going to have to change the DOS executable format to some other kind of format to support all these extra features. And at that point, if those DOS programs are no longer DOS programs, are you really running a DOS? Wouldn't you then have to provide compatibility with those older DOS applications? Because DOS is obviously all about compatibility. So if you wanted to run a game or an application that had been written for, let's say, DOS 5 on this newer DOS, you'd have to provide some sort of environment, basically an emulation layer, so that the older DOS application could run on the newer system. And so we got to the conversation: if you're going to change what DOS is so that it can no longer run these old DOS applications, and the only way to run an old DOS application is through some kind of emulation, well, you are definitely no longer DOS. You can do that on Linux today. You can absolutely run a Linux system in text mode. That gives you networking by default, multitasking, a flat memory model, long file names, lots of different features. And if you want to run a 16-bit DOS application, then you're going to run DOSEMU so you can boot a FreeDOS, or you're going to run some other environment so you can run a DOS application. These days, you'd probably run DOSBox instead if you just want to run one little application. Yeah, that doesn't make sense for us. So as we looked ahead at 1.3, we said compatibility absolutely is key. We are not a DOS anymore if we no longer run DOS programs and if our standard DOS executable is no longer a DOS executable. We also, in 1.3, wanted to focus on open source. Now, FreeDOS has always been open source software, going back all the way to the beginning. And I've always said there's no point in having a free DOS if it's not open source: not everybody can use it, and if you don't have access to the source code, there's no point in having a free DOS; you might as well have DR DOS or PC DOS or MS-DOS. And so we've always had this focus on open source. But there have been a couple of cases where we've been like, well, do we really need to stick to the open source thing if this is a really great application? We need to include things like that. Those decisions, I will say, have actually come back to bite FreeDOS later on.
There was, without being specific about the name, an assembler that had claimed to be really compatible with Microsoft's assembler, MASM. We included it for a while, but it unfortunately did not come with source code. It did have a license that said this is free software, in the sense that it's gratis: you could actually use it for any purpose. And a couple of years later, we discovered that it was probably not an independent program. It looked like somebody had taken Microsoft's assembler and done some disassembly or something to modify the program just a little bit, basically changing the name and changing all the references from Microsoft to another name. So yeah, things like that really were unfortunate. We had to take that version of FreeDOS down. It was fortunately not in the base distribution, so we just took down what's called the full distribution. And that's why I want to make sure that every package that's in FreeDOS is open source software. That does mean that in 1.3 we're going to be removing some packages that are not actually open source. One package, for example, is a patch program for Turbo Pascal programs. Now, there's no reason to assume this one is some sort of mislabeled proprietary program, but it does not include source code. It's been so useful that a lot of Pascal programmers have been asking us, please make sure you include this in FreeDOS. And because it's got a license saying anybody can download it and use it and distribute it and whatever, we had included it in previous versions, but it's not open source. And so in this next version, I'm making a real push on: is it open source software? Now, by the way, the other challenge that we have with open source software in FreeDOS is that FreeDOS predates the term "open source software". And DOS itself, from 1981, actually predates the Free Software Foundation and the GNU GPL. So a lot of the programs that are out there for DOS, programs we've included in FreeDOS or that have been updated since then and still target FreeDOS, are going to come with licenses that are not the GNU GPL and not an open source license as defined by the Open Source Initiative. But we are looking at those very carefully to decide if they are open source, free software types of licenses. And so I want to make sure that everything we include in 1.3 follows that model: if there's a license in there that's not a recognized open source software license, I want to make sure that it actually is free and open in practice. There are, for example, a lot of programs that are public domain (they've got a statement that says that), but we've got some other ones that are under some other license just because the GNU GPL didn't exist at that point. We're also, by the way, looking at reducing the complexity of what's in FreeDOS. We've got a lot of packages that we've added to FreeDOS over the years. We had this discussion fairly recently: we've just added a lot of packages. As somebody had a need for a program, we added that program. But we never really went back and looked at: is that still the right program to include? Do people really still need that program these days? And most importantly, is somebody still updating that program, or has it become stale at this point?
So looking at SEAL and Ozone, two of the graphical user interfaces: they're no longer being updated and haven't been updated for a long time. Are those packages that we should remove? There are some other packages in the 1.2 distribution that we're looking at too; maybe we don't need to include them anymore. Is that really something that people are going to need, that people are going to try to use? And let's also face it: if you need a program that's not included in the FreeDOS distribution, we include all that stuff, if it's open source, in our archive, so you'll be able to download it by going through the website. So we're looking at what packages are out there in FreeDOS 1.2, and as we build 1.3 we're taking a real close look at what things to include and what things not to include. And at the same time we're trying to streamline the install. With 1.2, you can do a full install, but it doesn't actually install everything. That's a little bit of confusion: I did a full install, but not everything got installed, and I have to use the FDIMPLES package manager to install some other stuff. So in the new version, the idea is: maybe we'll still have some packages that don't get installed in a full install, but if a package group gets installed, it should be everything in that package group. A full install should install everything, at least on a package-group basis. We're still having that discussion about what to install versus what not to. So if you're wondering, by the way, why 1.3 is taking a little while to come out, it's partly because we're having those discussions. But also, because it's taking a while, other development has moved forward, and so we've been talking about other features to include in 1.3. One really neat new feature is an updated program that we're thinking about replacing the help system with. AMB, I think it's called Ancient Machine Book, is basically an ebook reader that allows you to have these ebooks. They're kind of based on HTML, and there's an open source way to translate those books into this AMB format; it's not in any way proprietary. And it's a great way to replace the help system. Also, if you want to read an ebook, there are some public domain ebooks that you can find on the AMB website. So that's one new program that we're looking at for FreeDOS 1.3. And we're also looking at giving FreeDOS 1.3 a live CD image; if you've used the release candidate, you've probably tested that out. The whole concept being: FreeDOS is not that big, this should not be a big operating system. As FreeDOS has grown, it feels a little bit more like a Linux system, so we're trying to bring it back a little and make it simpler. Do you really need to install FreeDOS in order to use it? Or what if you just booted the live CD, could you just use it with that? So that's the goal: you should be able to boot the FreeDOS live CD and just use a basically pre-set-up version of FreeDOS. It might not have all the packages available to you, because that would be a lot to put on a live CD, but certainly the base version of FreeDOS should be available to you just by booting the live CD. So that's one of the things we're also looking to do in FreeDOS 1.3. And those are, I guess we'll say, the big features in 1.3. Now, what's coming next? After 1.3, we're not done.
We're still looking at doing some other work, so we are keeping an eye on the future. Right now there's a development by a group of developers working on a thing called NightDOS, a 32-bit drop-in kernel replacement for the FreeDOS project. By the way, it's called NightDOS because it runs in protected mode, and in the US, "PM" usually means afternoon or evening, so it's running at night, hence NightDOS. That's the joke: it's basically "PM DOS". So that's one of the major new developments that we're keeping an eye on. They're still working on it, and it's a big task to create a new DOS kernel that still runs all the 16-bit DOS applications, because, again, if it's not going to run 16-bit DOS applications, it's not really DOS. So it's a big project to make a 32-bit DOS that supports all the new features they're working on and still runs these old 16-bit DOS applications. It's taking them a while to do that. But once they get it working, I would look at including that in, let's say, a FreeDOS 3.0 as basically an alternative kernel. And then, once we know it's stable and it works, maybe that would become the default kernel for 32-bit systems. And if you install FreeDOS on a legacy system, like a 286 or an 8088, then you'd have some other kernel, basically called the legacy kernel or the classic kernel, to run on those old 16-bit systems. So yeah, we're watching the development of the NightDOS kernel. Another thing I'd love to see added is an alternative DOS shell. I wrote about this a while back; this is basically looking at a COMMAND.COM-like system. I would love to see, rather than doing something like what 4DOS did, which was basically to make a version of COMMAND.COM plus a bunch of cool utilities, why don't we actually make a significantly different DOS shell? Still command-line driven, but instead of the standard batch file language, let's actually borrow some stuff from BASIC. So if anybody out there is looking for how to contribute to FreeDOS and wants to write something really cool, this would be a really cool thing to write: an alternative DOS shell. I think there are ways, and I've written an article about it, that you could expand the DOS environment, the DOS batch file environment specifically, and still maintain compatibility, still be able to run those .BAT files. If a batch file ran on, let's say, MS-DOS 6, it should still run on this new alternative DOS shell; but the shell could have extra commands that don't exist in MS-DOS 6 that would allow you to, for example, add integers together, assign an exit value to a variable (which you can already kind of do), or branch off into different things. There are a lot of different ways that you can expand on DOS by borrowing from BASIC, so that people can make DOS even better. So that's what we are looking at after FreeDOS 1.3: lots of stuff to do in 2.0. Now, I think today you're going to see most people running FreeDOS inside a virtual machine, and that's certainly how I do it, but we do know that a lot of people like to keep running FreeDOS on actual legacy hardware. It's getting kind of hard to find an original IBM PC, or even a 286 AT, these days, but they're out there and people do keep working on them.
We're seeing more people running Pentium-class hardware, maybe even some 486 hardware, and that's great to have. But I will say that if I were to look at all the people downloading and using FreeDOS today, most of them are probably running FreeDOS in some kind of virtual machine, and if you're curious, that's actually how I run FreeDOS. I run FreeDOS in a virtual machine on Linux called QEMU, and I think that'll continue in the future. It's going to be harder to keep this old hardware up and working, and I think you'll see more people running FreeDOS in a virtual machine. Anyway, that's kind of the past, the present, and the future of FreeDOS. At this point, I'll pause and take questions in the chat. So thank you very much.
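As a practical footnote to the QEMU setup mentioned above, here is a hedged sketch of how that kind of virtual machine is typically created and booted on Linux. The disk image size, memory size and ISO file name are placeholders; adjust them to whatever FreeDOS release image you actually downloaded.

```sh
# Minimal sketch: run FreeDOS under QEMU on Linux.
# Image/ISO names and sizes below are illustrative placeholders.
qemu-img create -f qcow2 freedos.qcow2 512M            # create an empty hard disk image
qemu-system-i386 -m 32 -hda freedos.qcow2 \
    -cdrom FD13-LiveCD.iso -boot d                     # boot the live/installer CD first
# After installation, boot from the hard disk image instead:
qemu-system-i386 -m 32 -hda freedos.qcow2 -boot c
```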
Throughout the 1980s and into the 1990s, DOS was everywhere. And despite being a 16-bit command line operating system, DOS was actually pretty good for the era. This presentation will look back at DOS in the 1980s and 1990s, and remind us why we started FreeDOS in 1994. We've continued working on FreeDOS since then. We released FreeDOS 1.2 in 2016, and are currently working on FreeDOS 1.3. DOS stopped being a moving target in 1995, but FreeDOS re-imagines what a "modern DOS" could look like in 2021. We'll also look at the current state of FreeDOS (FreeDOS 1.3), and what's coming up next (FreeDOS 2.0).
10.5446/53351 (DOI)
Hello everyone, my name is Stéphane Bortzmeyer. During the week I work at AFNIC, which is the domain name registry for .fr. But I'm not here to talk about DNS; I'm here to talk about the Gemini system. Gemini is not a perfect fit for the retro computing room, because it's not really retro: the system has been designed in the last year. And it doesn't run on the typical machines that are shown in retro computing rooms. But it has some retro look and feel, a lot of retro ideas, and in a way I think it's not completely wrong to present it here. The name Gemini comes from the American space program between Mercury and Apollo, and we'll talk more about this later. So welcome: what is Gemini, and why was it necessary to invent something new to distribute information? Gemini is a new system, but it looks retro, and this is why I talk about it in this retro computing room. Gemini was born from dissatisfaction with the web. It plans to replace the web in some uses, not all of them of course, because there are things that are wrong with the web. First is pervasive surveillance. It's a well-known problem with the web: something that you do is recorded, leaves a trail, and can be used, and is actively used, against you. The second is slowness. Very often loading web pages is too slow. You have to load a lot of other resources; it takes time, and sometimes it fails. And once it is loaded, you still have to display it, which requires a lot of complicated computation, and you also have to run the JavaScript, which is often the majority of the content of the web pages. You have very little choice of browsers: today you have only four or five browsers remaining and only two rendering engines, so competition is limited in this field. And there is a strong tendency to emphasize form over content, to add a lot of distraction: auto-loading, auto-starting video, pop-ups, etc. So when you use the web to access actual content, very often this content is drowned under a lot of useless stuff. So why is it so? Why is the web so slow, and why so many problems? There are of course several reasons, not all of them of a technical nature. For instance, there are political pressures and marketing. There are also economic reasons: the need to gather as much data as possible. But there is also a technical problem with the web: a problem of excessive complexity. The web is too complex and has too many features. A lot of features are not always good; you have to pay a price for these features. For instance, a typical web client, a browser, sends too much information to the server. There is a User-Agent header, for instance, with all the details about the exact version you run, and these details can help to profile, to fingerprint, a specific user. You have third-party resources: you believe that you connect to one website, but actually you are connecting to dozens of CDNs and other servers that can track you because of the requests you make for third-party resources such as video, JavaScript code, etc. And you also have, of course, the infamous cookies. All these features are here for a reason. They are not completely useless; it was not stupid to include them. But in the end, they give out a lot of information, which turns your browser into an agent not for you, but for the advertising industry.
There is also too much code to run: even when the goal is simply to display a static page, you have to run a lot of code on the server and on the client, because of JavaScript, which slows things down and increases electricity consumption. It's not possible today to develop a browser from scratch because of the complexity, which means decreased competition. In practice, when Google decides to add something to the web, they put it in Google Chrome and the rest have to follow, because soon it will be widely adopted. So this lack of competition has practical consequences for the user. And there are many features in the web that help to prioritize form over content. These features are good at the beginning (I mean, CSS is a good idea, displaying images is a good idea too), but the sheer number of features you have to work with to improve the form means that many, many web authors focus mostly on form and completely forget the content. Even when you want to access simple information, simple facts, you have to deal with websites with a lot of auto-loading and auto-starting video, images, pop-ups, etc. What are the solutions to the problem of excessive complexity? One typical solution, often used by people who come to FOSDEM, is to add software to the already bloated browser to block the worst things of the web. For instance, I assume that most people listening to me right now use an ad blocker, because ads are a big source of waste on the web today. Also, most browsers allow you to control cookies: for instance you can reject cookies completely, or, if you are less radical, you can forbid third-party cookies from other organizations. Which, by the way, starts an arms race, because when browsers started to forbid third-party cookies by default, advertisers started to use CNAME cloaking and other tricks to make the browser believe that they are not third-party cookies. So it's complicated to try to win this fight. And there is of course NoScript to prevent the execution of JavaScript, because on most websites JavaScript does not bring anything; especially when it's a website about information (when it's a web application it's a bit different). You can even go further and use a lightweight browser, something that will run on low-end machines, will be fast and, most important, will allow you to focus on content. Links is a typical example, but it may be a bit too harsh. You also have Dillo, which is a graphical browser, so it's a bit more convenient than Links. But the problem with all these approaches, blocking the worst content or using a lightweight browser, is that they break a lot of websites. And when I say they break, it doesn't mean that you cannot go to them: you can go to them and then you get something unreadable, you don't know if something is missing or not, you don't know why your button doesn't work, things like that. So it's both really annoying and you cannot know it in advance. You never know if the website you are trying to visit will work or not when you have this blocker or this alternative browser, which means that you will spend a lot of time trying: it doesn't work, okay, I go into NoScript, I enable JavaScript, no; I try this, I disable cookies, was that the reason? Yes, no. So in practice, using these solutions makes browsing the web a very painful experience, unless you always go to the same two or three sites. But of course, that's not the typical use of the web.
So the Gemini system addresses the problem of excessive complexity with a very radical approach: a new protocol and a new format. Gemini does not use HTTP; Gemini is not the web. That's the first thing that you have to remember: it's not the web, it's something different, you need new software, new servers, new content, etc. It's not the web, it doesn't use HTTP, it uses its own protocol, and it's not HTML, it's a new format. The protocol is very simple and purposefully not extensible. You cannot add features to the protocol, and many things are done to make that difficult: for instance, fields are very limited, so it will be harder to add new features. Of course you have no cookies, and you have no headers at all, so no User-Agent or Accept-Language or other headers that can be used for fingerprinting. So it's the first step to make things predictable. Unlike the web with an alternative browser like Dillo, you know in advance that the server will not use this or that feature, and you know that your browser will not send information about you. The new format of the files, text/gemini, also called gemtext, is very simple. It's a very small subset of Markdown; actually you will not find a lot of Markdown in it, and remember that Markdown is not standard, so the core original Markdown didn't even have images. In the case of gemtext, you have mostly titles, headers, things like that; you have no pictures and no inline hyperlinks. Links exist, but they have to live on a separate line outside of the text, for a good reason: it makes parsing much easier. Some people in the retro computing room may think that this is nothing new, it's just Gopher again, we go back to Gopher. Remember that Gemini was named this way because it comes after the Mercury program, which is Gopher, and before Apollo, which is the web. Gemini tries to be better than Mercury but less expensive and complex than Apollo. So in Gemini we have URLs, which are very convenient, and we have mandatory TLS; there is no clear text in Gemini, because today with pervasive surveillance you need TLS all the time. And the content is by default in UTF-8, an encoding of Unicode, because we cannot, this year, have an international system without Unicode, of course. And the good thing about all these features of Gemini, and the lack of features, the lack of complexity, is that it can help you save energy, which addresses one of the big problems of mankind at this time. We don't just have a pandemic of COVID-19, we also have other issues to face. So now it's time for a demo, everyone loves demos. We are going to use the command line tool Agunua. Agunua is a bit like what curl and wget are for the web: a simple command line tool to retrieve content from a Gemini server. So let's try: I type agunua and then a URL. It simply connects to the Gemini server at the other end and you get content. You have hash marks here, which indicate the level of titles, level 2, level 3, etc. You have text, and you have links: when a line starts with an equals sign and a greater-than sign, it is a link. This link goes to another Gemini capsule. You can also have links that go to a web page. And you can also have relative links. The first two are absolute links; you can also have relative links like this one, a link on the same capsule. We can add things to the URL here, a different URL on the same capsule, and we get content with a lot of relative links, text, titles, links to web pages.
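Because gemtext link lines always begin with "=>" and sit on their own line, extracting them needs almost no parsing. Here is a minimal Python sketch of that, just to illustrate the format described above (this is my own illustration, not code from the talk):

```python
def parse_gemtext_links(text):
    """Return (url, label) pairs from a gemtext document.

    Gemtext link lines have the form: "=> URL optional label".
    """
    links = []
    for line in text.splitlines():
        if line.startswith("=>"):
            parts = line[2:].strip().split(maxsplit=1)
            if parts:
                url = parts[0]
                label = parts[1] if len(parts) > 1 else url
                links.append((url, label))
    return links

sample = "# FOSDEM\nSome text.\n=> gemini://example.org/schedule Schedule\n=> https://fosdem.org Web version"
print(parse_gemtext_links(sample))
```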
Because the protocol is very simple, you can run it manually with a tool like gnutls-cli, which is a bit like openssl s_client. It simply establishes a TCP connection, then a TLS session, and then you can type a command to interact with the server. Let's see, I just paste the URL. We can see gnutls-cli connect: IP address, TLS handshake, certificate, etc., the usual TLS stuff. Then we just send the URL that I copied and pasted to the server, and we get the reply: 20 is a status code, it means OK. Status codes in Gemini are two digits, to limit the risk of extensibility, which would be a problem for Gemini. By the way, 51 is "not found": if you get 51, it means that the resource does not exist. You have the MIME type and then the content. Here, nothing extraordinary. At the end, because the Gemini server immediately terminated the TLS connection, gnutls-cli complains that it was not properly terminated, but actually it's normal behavior. There is currently a discussion among the Gemini people about how to properly finish the TLS session. Of course, the ordinary user won't see the protocol in action. He or she will see instead what is presented by a Gemini client. There is no CSS in Gemini, and nothing in the gemtext to control the appearance of the page. Everything is decided by the Gemini browser. So we are going to see the same page with three different browsers. The first one is Lagrange, which is a graphical one. You can see here the schedule for FOSDEM and here all the events. You have links, you can click on links, go back, etc., etc. An ordinary graphical browser. We are now going to switch to an Emacs-based one, because everything exists in Emacs, it's well known. Elpher is an Emacs mode for Gemini, so you can see now Elpher with the same page, same thing. You have links, you can visit links, you can follow them, you can go back, etc. It's convenient for people who know Emacs. Of course, you can also have browsers which are based on curses or similar technology. For instance, we are going to see Amfora, which is a terminal Gemini browser. Here you can see Amfora; to activate a link, you type its number here. So it's not really what I would call convenient or user-friendly, but it works, and it is light and very fast. So the important thing to remember is that with Gemini, when you are the author of a Gemini page, you don't control the appearance; the browser does. OK. Now we have seen how to run the protocol manually; the protocol is very simple. And it's one of the important goals of Gemini that it is easy to write software, to avoid concentration and lack of competition. Because when you have only one or two programs implementing a given protocol, the people who control these programs actually control the protocol and can do what they want. So to make it more difficult for one author to influence the protocol, it has to be very simple, so you can have a lot of different programs. And because of this goal, we have a lot of Gemini programs. It was a success: everyone wrote a client or a server. It's a common joke among Gemini people to say that there is more server software than servers, because everyone tends to write one. And of course, a lot of hobbyists use this to try their favorite programming language. There is a Gemini server in Fortran, one in Prolog, one in assembly language. But running TLS in assembly is a bit difficult, so that one relies on a TLS proxy.
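To make the wire exchange shown with gnutls-cli concrete, here is a minimal Python sketch of a Gemini request: open TLS on port 1965, send the URL followed by CRLF, read back a "STATUS META" header line and then the body. This is only an illustration of the protocol as described above; certificate checking is disabled because many capsules use self-signed certificates (the TOFU model discussed later), and the example URL is just a placeholder.

```python
import socket, ssl
from urllib.parse import urlparse

def gemini_get(url):
    u = urlparse(url)
    host, port = u.hostname, u.port or 1965          # 1965 is Gemini's default port
    ctx = ssl.create_default_context()
    ctx.check_hostname = False                       # capsules are often self-signed,
    ctx.verify_mode = ssl.CERT_NONE                  # so no CA validation in this sketch
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))   # the whole request is just the URL
            data = b""
            while True:
                try:
                    chunk = tls.recv(4096)
                except ssl.SSLError:                 # some servers close the TLS session abruptly
                    break
                if not chunk:
                    break
                data += chunk
    header, _, body = data.partition(b"\r\n")
    status, _, meta = header.decode("utf-8").partition(" ")
    return status, meta, body                        # e.g. ("20", "text/gemini", page bytes)

status, meta, body = gemini_get("gemini://example.org/")  # placeholder capsule
print(status, meta)
```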
Otherwise, that assembly-language server is a cool project. Now it is time to talk about more political things: how is Gemini managed, how are decisions taken? Today, Gemini has a written specification of both the protocol and the format, which is, according to the author, almost done; I mean, at some point it should be declared final. My personal opinion is that there are still a lot of things which are under-specified, so it could be that a bit more work is necessary. But that is only my opinion. The governance of the Gemini project is quite simple. There is no Gemini foundation or Gemini consortium; it's clearly too early for that. There is only one founder, named Solderpunk, and Solderpunk decides in the end, which keeps Gemini focused and avoids design by committee, because when you have several people working on a project, there is always a risk that each one says, oh, I want this and this. And remember that the goal of Gemini is simplicity: not adding features, but removing features, and avoiding the effect of "I want this and I want that". So it's better to have a single point of decision for the project. All of this is good and fine, but maybe at this stage some people start to think that it's too good to be true. There are indeed problems, limits and criticisms of Gemini; not everything can be perfect. The big issue with Gemini is that it's minimal, it's brutalist. Isn't it too brutalist? For instance, there is no cache control in Gemini: a server cannot tell a client when a resource will expire and should be refreshed, so it's very difficult for an intermediary or for a browser to know how long it can keep a resource in memory. More difficult, there is a problem of security with TOFU. Gemini uses TLS, it's mandatory, you cannot have clear-text communication. But with TLS, some people believe that it's too difficult to get a certificate, and they don't want to depend on certificate authorities. So the Gemini specification says that you can use TOFU, trust on first use, which is a mechanism used by SSH. The first time you connect, it's OK; in the case of SSH, by default, a confirmation is requested. Then, if the key stays the same, it's OK, but if it changes, you get a big frightening warning. The problem is that the Gemini space is very different from SSH, because with SSH you connect only to a few servers that you know, and you can talk with the admin. With Gemini, or with the web, that's of course not possible: you cannot know everyone. So TOFU is not really, really secure. Also, in my opinion, the specification is currently quite vague about the details of TOFU. That's probably one of the big problems of Gemini today. There is also, of course, a problem of user acceptance. Will users accept to live without images? Images are useful. Most of the images on the web are useless, they are a waste of time and resources, but not all of them: if you write a document, images can be a powerful means to illustrate it. So you can have images in Gemini, but not embedded images, so it's not really practical, and it's yet to be seen whether users will accept it. By the way, one of the reasons why there are no embedded images is that third-party resources are an important way not only to slow down navigation, but also to track users.
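As an illustration of the TOFU idea mentioned above (the Gemini specification leaves the details open, so this is not spec code), a client could simply pin the SHA-256 fingerprint of the server certificate the first time it connects and warn if it later changes. The known-hosts file location is an arbitrary choice for this sketch.

```python
import hashlib, json, socket, ssl
from pathlib import Path

KNOWN_HOSTS = Path.home() / ".gemini_known_hosts.json"   # arbitrary location for this sketch

def tofu_check(host, port=1965):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # we pin the certificate ourselves instead of using CAs
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert(binary_form=True)     # raw DER certificate
    fingerprint = hashlib.sha256(cert).hexdigest()
    known = json.loads(KNOWN_HOSTS.read_text()) if KNOWN_HOSTS.exists() else {}
    if host not in known:
        known[host] = fingerprint                        # trust on first use
        KNOWN_HOSTS.write_text(json.dumps(known, indent=2))
        return "first use: certificate pinned"
    return "ok" if known[host] == fingerprint else "WARNING: certificate changed!"
```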
Also, from my experience trying to make Gemini content from existing content: if the original content uses hyperlinks, in Markdown, XML, whatever, it's quite difficult to translate into gemtext, which does not have inline hyperlinks at all. So you have to move the links outside of the text, and it's not always very easy, I find. And now, at the end of this talk, we are going to talk about retro computing. Is Gemini really suitable for retro computing? Well, it certainly can run on low-end machines. You can run a server or a browser on something as small as a Raspberry Pi 1, something that you cannot do with Firefox or Chrome. But it will not run, of course, on a typical 8-bit game console. The typical reasons are TLS and Unicode, which require resources and serious libraries. These two features are enough to say that Gemini is not really retro computing. It's more low-tech, slow web; you can pick the words you want, but the idea is that we can run an information system, distribute information and content, without running ten thousand lines of JavaScript for each request. So if you really need retro computing on a small machine, you may still use Gopher. Gopher still works, does not have TLS, does not have Unicode, so it can be used on small machines. But the goal of Gemini is not to be usable only by hobbyists. The goal is to be really usable by real people today, this year. That's why TLS and Unicode: today we simply cannot have a serious system without them. So I hope that this talk will give some people the idea of trying Gemini, of contributing. You can contribute software if you want, but to tell the truth, we have a lot of software. The problem today is more to have content and users, and I hope it will happen this year. Thank you, and let's talk.
Many people are unhappy with the current state of the Web: pervasive user tracking, a lot of distractions from the actual content, and so much complexity that it is very hard to develop a new browser from scratch. Why not go back to the future, with a protocol and a format focused on lightweight distribution of content? This is Gemini, both a new ultra-simple protocol and a simple format. It is not meant to build an alternative to YouTube, but it is useful to access content with a minimal client. Gemini is not "retro" but it "looks retro".
10.5446/53353 (DOI)
Hello and welcome to the 2021 Free and Open Source Software Developers' European Meeting. Today we'll be talking about an open source project called RaSCSI. We'll give you some background on the project, share its current status, and give you a peek at the roadmap for future development efforts. A little bit about myself: my name is Tony Kuker. I've been an Apple user since the 1980s, when my fourth grade teacher gave me a BASIC manual and an Apple IIe to program on during recess on rainy days, and I've been an Apple user ever since. For my day job I'm an embedded systems and software engineer in the avionics domain. I've been working for about 20 years in that area and I've worked on a lot of different embedded systems, all the way from safety-critical avionics to in-flight entertainment devices. I've also been a fan of the Raspberry Pi since they first appeared on the market; it's a neat, low-cost development platform. And since the pandemic I've picked up working on the RaSCSI project. It's a perfect marriage of my interests in vintage Macs, embedded development, and the Raspberry Pi. So what is RaSCSI? RaSCSI is an interface board that allows software on the Raspberry Pi to read and write signals on a SCSI bus. It essentially bit-bangs the data out onto the SCSI bus. Luckily SCSI isn't extremely timing dependent: it uses request and acknowledge signals, and this allows the Raspberry Pi to communicate very reliably with the other SCSI devices. RaSCSI runs on a standard Linux distribution, so it's definitely not real-time, but we've seen some pretty good performance. There are versions of it that run on bare metal on the Raspberry Pi; we won't necessarily talk about those today, but they are out there. So before we get into what RaSCSI is, we'll talk about what RaSCSI is not. It's built upon the standard Raspberry Pi OS Linux distribution from the Raspberry Pi Foundation. Performance is good for older devices; we have to consider that a lot of the old SCSI devices are based on Z80 or 8052 class processors, so a gigahertz Raspberry Pi can easily do better than that even while just bit-banging the data out. The current RaSCSI does not do wide SCSI; we don't have enough GPIO available on the Raspberry Pi currently. We also don't do serial attached SCSI. This solution is not intended for mission-critical use cases. It works great for playing vintage games and getting your vintage computer up and running, but it's not intended to be used in mission-critical use cases. So RaSCSI is not intended to be everything to everyone. Some use cases for RaSCSI include replacing vintage mechanical drives on vintage computers, such as a hard disk, a CD-ROM, or a magneto-optical drive. It's also able to emulate rare and unique vintage SCSI devices. An example would be a SCSI Ethernet interface. For a while these were made for Macintoshes and other devices that had a SCSI port but no Ethernet network interface: things like the DaynaPort SCSI/Link; there was a Nuvolink; Asanté made a couple of versions. What these did is connect to the SCSI port and then allow that computer to talk on the Ethernet network. The DaynaPort is the one that we've been focusing on for RaSCSI. It's also used on the Atari ST with FreeMiNT. Roger Burrows did some great reverse engineering work back in the early 2000s and documented that and really shared it with the community, which was great.
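To make the bit-banging idea above concrete, here is a rough Python sketch of the SCSI asynchronous REQ/ACK handshake as seen from the GPIO side, using the common RPi.GPIO library. This is only an illustration: the pin numbers are made up, the signals are treated as active-high for readability (real SCSI lines are active-low), and it ignores bus phases, parity and the transceiver direction control that the real RaSCSI code handles.

```python
import RPi.GPIO as GPIO

REQ, ACK = 17, 27                                  # hypothetical BCM pin numbers
DATA_PINS = [5, 6, 13, 19, 26, 16, 20, 21]         # hypothetical pins for data bits 0..7

GPIO.setmode(GPIO.BCM)
GPIO.setup(REQ, GPIO.IN)
GPIO.setup(ACK, GPIO.OUT, initial=GPIO.LOW)
for pin in DATA_PINS:
    GPIO.setup(pin, GPIO.IN)

def read_byte():
    """Receive one byte from the target using the REQ/ACK handshake."""
    while not GPIO.input(REQ):                     # wait for the target to assert REQ
        pass
    value = 0
    for i, pin in enumerate(DATA_PINS):            # sample all eight data lines
        value |= GPIO.input(pin) << i
    GPIO.output(ACK, GPIO.HIGH)                    # tell the target we latched the byte
    while GPIO.input(REQ):                         # target releases REQ...
        pass
    GPIO.output(ACK, GPIO.LOW)                     # ...then we release ACK, handshake done
    return value
```

Because every byte waits for the other side's REQ and ACK edges, a scheduling delay on the Linux side only slows the transfer down instead of corrupting it, which is why a non-real-time Raspberry Pi can get away with this.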
One of the big benefits of having the Raspberry Pi is you have the full Linux network stack, and you have the flexibility of setting up network address translation, doing wireless communication, doing it over Ethernet, doing VLANs, whatever you need to do. Other SCSI emulation products out there use microcontrollers, which are great, they're more deterministic, but they don't really provide a good networking stack to network your devices. Another cool SCSI device that was made for the Macintoshes was called the ScuzzyGraph. What the ScuzzyGraph allowed you to do was take a black-and-white Mac, the old toaster Macs, and use an external color display. The unfortunate thing is that ScuzzyGraphs are like unicorns: they're rumored to exist but very rare. Some day we hope to find one that we can capture some data on and do some reverse engineering on, so that we can add external display support to some of these older Macs. Additionally, there's a host bridge device that GIMONS developed for the Sharp X68000. It's currently only available for the Sharp X68000 and it uses a custom driver that he created on the host, but it allows Ethernet connectivity and it gives you the ability to directly access the Linux file system from the Sharp X68000. So where has RaSCSI been used? GIMONS originally developed it for the Sharp X68000 in Japan. We've taken that, and what we're calling the 68kMLA version is kind of a fork of the project for vintage Mac users. We haven't been able to test this fork with the X68000, but in theory it should work; we do have one report that it does not, but we haven't had time to dig into that yet to understand why. The 68kMLA version we've tested with dozens and dozens of 68k and PowerPC Macintoshes from the 80s and 90s. We've had good luck running it with everything from the Mac Plus all the way up to a Power Mac G3, which is about when Apple started removing the SCSI port from their devices. One note: the Mac Plus was developed before the SCSI spec was finalized, so RaSCSI will work with the Mac Plus but you can't boot from it. A lot of users are using it with a Macintosh SE/30, which is considered by a lot of people to be the favorite toaster Mac out there, so we've had great luck with that. A few folks in the Akai sampler community have taken the RaSCSI and had good luck getting it to work there. We've also connected it up to an Ubuntu 20 Linux PC just to do some testing and it seems to work great there. We do have a complete list of everything that's been tested on the wiki, and we try to keep that up to date every time we hear of some other system that's been tested and used with it. Now we'll get into how RaSCSI works. One of the basic requirements for SCSI is that each end of the bus must have termination. There are two ways to do that: active or passive termination. RaSCSI uses passive termination, which is essentially a 220 ohm pull-up resistor to 5 volts and a 330 ohm pull-down resistor to ground. This ends up at about 3 volts on the signal line when it's idle. RaSCSI is built around the 74LS641 transceiver from Texas Instruments. This has open-collector or tri-state outputs depending on which way the direction pin is set, and this is controlled by the Raspberry Pi to set the direction of these transceivers, whether they should be inputs or outputs at any given time.
When the Raspberry Pi commands these to be outputs, the output will either be low, which means that the transceiver is grounding the signal, or it will be open, which means the signal is pulled high by the termination resistors. When the direction is set to input, the Raspberry Pi will just read the inputs off of the SCSI bus that are driven by another device. One thing to note is that the Raspberry Pi uses 3.3 volt logic for its GPIO, whereas the SCSI bus is generally built around 5 volt logic. So this essentially requires you to have the transceiver in there to technically meet the specification of how the SCSI bus is supposed to work. There are a few different versions of RaSCSI out there. The first and simplest is a direct-link version. There are no transceivers; it's just the Raspberry Pi GPIO connected up to a SCSI port on a host machine. Some people have done this with circuit boards, and some people have just put together cables that map the wires as you see here in this picture. It works because the voltage on the line, because of the pull-up and pull-down resistors, is usually 0 volts or 3 volts, so the Raspberry Pi GPIO can handle those voltages. The part where you run into trouble is that, according to the SCSI spec, when you pull the signal low you're supposed to be able to sink about 48 milliamps of current. Unfortunately, the Raspberry Pi is only designed to sink about 18 milliamps of current. So it's possible that you damage your Raspberry Pi if you use this direct-link connection method, because you're technically using the Raspberry Pi outside of its design limits. That being said, a lot of people have had luck doing it this way, but it's not the preferred way of building a RaSCSI. There are a few boards out there that are target-only. These do use the 74LS641 transceivers, which are right here, which we talked about previously. The thing that makes this different is that the direction of a lot of the signals is controlled by the I/O signal line from the SCSI bus. The target-only board can only respond to requests; it can't initiate any transactions itself. The full-spec version is the most common. It's similar to the target-only one, you still have your transceivers on there, but the difference is that there's more flexibility in the direction control of the transceivers. This allows the full-spec version to initiate transactions: it can act as a host, so it can read drives, query the bus, and do other activities like that. One key thing to note is that each of these requires a different software build; there are compilation options when you build the software. RaSCSI does print that out when you first start up, so it's easy to check. So there are a lot of different SCSI emulation solutions out there on the market; we'll talk through a few of them here. RaSCSI is what I'm partial to. There's a very energized developer community out there, it's fully open source, and you have the full Linux software stack available to add new features, so you can do a lot of cool things like web interfaces and those sorts of things. One of the sister projects to RaSCSI is BlueSCSI, which uses the Blue Pill microcontroller board. It's a fork of ArdSCSino. BlueSCSI is intended to be a very low-cost, simple solution to replace failing hard drives. Again, it's fully open source, and there's a lot of collaboration with the RaSCSI community.
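As a quick aside before the product comparison continues, the passive terminator values mentioned above explain both the ~3 V idle level (safe for 3.3 V GPIO inputs) and why driving a line low exceeds what a Pi pin can sink. The short Python calculation below simply redoes that arithmetic; the 48 mA figure is the SCSI spec requirement quoted in the talk and the ~18 mA is the Pi limit quoted in the talk.

```python
VCC, R_UP, R_DOWN = 5.0, 220.0, 330.0        # passive terminator: 220R to +5V, 330R to GND

# Voltage divider when nobody drives the line: this is the idle level the talk mentions.
idle_v = VCC * R_DOWN / (R_UP + R_DOWN)
print(f"idle level: {idle_v:.1f} V")         # -> 3.0 V

# When a driver pulls the line to ground, it must sink the pull-up current from the
# terminators at BOTH ends of the bus.
sink_ma = 2 * (VCC / R_UP) * 1000
print(f"worst-case sink current: {sink_ma:.0f} mA")  # -> ~45 mA, close to the 48 mA spec value,
                                                     #    well above the ~18 mA a Pi GPIO pin can sink
```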
The case where BlueSCSI is fantastic is if you need a lot of them but you don't need a lot of flexibility or a lot of the other bells and whistles; BlueSCSI is definitely the low-cost approach to emulate a SCSI device. Another one is SCSI2SD. This is like the grandfather of a lot of these devices. It's proven, it's stable, it's a mature product; it's been around for years. Some of the older versions were open source, but since SCSI2SD version 6 was released, that is closed source. One great thing about it, though, is that SCSI2SD version 6 has fantastic performance, and we'll see some numbers on that later. But for these older Macintoshes, like if you have a 20 MHz processor, the performance really isn't necessarily needed. A relatively new product that's come about is MacSD. It's another great product. It supports CD audio, which is unique, it has very, very easy setup, great documentation, and on their forums they're constantly adding new features. But again, it is closed source, which is unfortunate. You can see the prices down below; they're all over the map. RaSCSI is in the middle at $45 plus a Raspberry Pi. You've got BlueSCSI at the lower end of the price range and MacSD at the higher end. It really depends on what you're trying to do as far as which one of these is the best to use. So when you get your RaSCSI, you can get it in two different ways: pre-built or in kit form. When you order the kit from Tindie, the 0402 resistors are already pre-populated on the board, as well as the LEDs, the fuse, the diode, basically whatever SMT parts JLCPCB is able to install in their factory, they install those for you. So when you get your kit, you will have to install the through-hole parts, the connectors, and the transceivers, since the transceivers are not a very common or standard part. One benefit of the kit version is that there are different ways to assemble it; we'll get into those here in a little bit. So you can have either a super compact version of RaSCSI or a more flexible version. A shout out to PotatoFi: he has done a walkthrough, a live stream of the assembly of one of these, and it's out there on YouTube. He did a great job putting that together, so if you haven't done a lot of soldering but you want to tackle doing this, the YouTube video will be really helpful, and the community is there to help get you up and going as well. One of the key tricks with this board is that you'll notice IC2 and IC1 are flipped. Of the first four boards that were built by four different people, all four of us made the same mistake: we soldered all of the ICs in the same orientation, but these two need to be reversed. So if you do the kit version, keep that in mind. The other thing about RaSCSI, like I mentioned, is that it's fully open source, so you're welcome to go and get your own PCBs made. I would not recommend soldering the 0402 resistors on there yourself unless you're especially brave. Since it's open source, you could take the board design and modify it to use through-hole parts, or you could just eliminate the resistors completely and not do the termination logic, which is pretty much all the resistors are needed for; you would need an external terminator to use that configuration. So as I mentioned, there are a couple of different assembly options. The standard configuration is where you have your Raspberry Pi on the bottom, and then you have your 40-pin header.
And then the RaSCSI sits on top of that. There's also an optional daisy chain board that can sit on top of that, which gives you an in and an out for daisy-chaining SCSI devices together. A compact configuration can be done using the Pi Zero if it doesn't have the headers already soldered on. What you can do is put the Pi Zero on top of the RaSCSI. The downside to this is that you can't access the HDMI or power connectors on the Raspberry Pi Zero, but we do provide a power connector on the RaSCSI, so you can use that to power the device; it will back-power the Raspberry Pi through the GPIO header. Most of the boards that are out there are the standard boards. If you order a preassembled one right now, they'll all be the standard configuration. But if you do a kit, there is a really awesome 3D-printable case out there. It only works with the compact configuration, though there are rumors that there might be a standard configuration version coming. You can get this case on Etsy, or someday soon it will be available so that you can just download the model and 3D print it yourself. So when you get your RaSCSI board, you'll see that there's a lot of stuff crammed onto a small board. At the top here you have your Raspberry Pi header; this is where you'll connect to the Raspberry Pi. It's a standard 40-pin GPIO pinout. In theory it should work with other do-it-yourself type boards that have the same pinout, but we haven't tested that. We've got a header right here that connects one of the cheap OLED displays from Amazon or wherever, and it'll show you the status of what drives are mounted and that sort of thing. We've got a 50-pin ribbon connector here. You can use this either to hook up to the ribbon cable inside your PC, or you can plug the daisy chain board into this header, and that'll allow you to daisy-chain SCSI devices. And then down here towards the bottom we have a DB-25 connector, which is the standard SCSI connector on most Macintoshes out there. Coming soon will be a version that supports PowerBooks as well, so that'll have the high-density SCSI connector down here at the bottom; keep an eye out for that, it should be coming. Up here is the terminator switch. This allows you to turn the passive termination on this board on and off. When you change these switches, you have to change both of them; make sure that they're both on or both off, or you might get some odd behavior. As I mentioned, there's a USB connector here, so if you're using the compact configuration, you can use this USB connector to power the Raspberry Pi and the RaSCSI board. And then lastly, we've got a header here that can be used to drive an external activity LED. So once you get your RaSCSI and you have it all set up and ready to use, there are a couple of different ways you can control it and add and remove drives. The first one, and probably the easiest, is the web interface. If you install this web interface, you can use pretty much any web browser on your network to add or remove devices. It works with newer browsers, but it also works with Netscape 4, and Netscape 4 is available for a lot of the older vintage PCs out there. So it's really unique and nice, because you can control exactly what's loaded in your virtual devices, like a CD-ROM drive, from within the vintage PC itself.
So if you want to swap disks, you can just open up the web interface, click eject, and then insert the next ISO that you want to use if you're playing games. There are also some cool features like downloading files directly from different abandonware sites like Macintosh Garden and Macintosh Repository: it'll download them and then create a virtual CD or an ISO image that can be mounted, so that you can get the file onto the Macintosh. You can also download the disk images directly from this interface, you can upload new ones; it's really quite flexible. One key thing to consider is that there's no password, there are really no network-security protections in place, so anyone on your network can access this. We definitely recommend you only use it on a closed private network. rasctl is the command line utility shown here on the right. It provides the same functionality as the web interface and then some more; behind the scenes, the web interface is actually using the rasctl utility. The nice thing about rasctl is that it doesn't require root access, where the RaSCSI service does. So anyone on the Linux Raspberry Pi host can change the configuration of RaSCSI. When you run rasctl, it opens a socket to the RaSCSI service, sends whatever command you gave it, and then waits for a response from RaSCSI. And again, there's no network security protection on this, so anybody on your network could use this interface to reconfigure your RaSCSI service; it's definitely recommended only for closed networks. Here are some benchmarks showing how the RaSCSI stacks up against some of the other drives out there in some of these systems. This test was done on a Macintosh Quadra 840AV, which is one of the last 68k Macintoshes that Apple made, so it had a 68040 processor running at 40 MHz, 128 megs of RAM, yada, yada, yada. Norton System Info 3.5 was used to gather these ratings. The interesting thing is that with RaSCSI and using a Pi 4, the benchmark is up around a Power Mac 9600 level. Compare that to the Seagate drive that shipped with the unit, which has a score down here of 124. Another interesting thing is comparing the different Raspberry Pi models. The Raspberry Pi 4 is obviously the best, but for the Raspberry Pi Zero, at the price of $5, the performance is still pretty good. If you're running something like an SE/30 or a Mac Plus or one of these older machines, a Raspberry Pi Zero is plenty; it's not going to be the bottleneck. I do have the SCSI2SD on here. This was version 5 of the SCSI2SD, which is not the highest-performance one, but it kind of gives you an idea where that lies. One of our users did a test using the SCSI2SD version 6: Null Eric was the user who did it, and he recorded a score of 247 on a Power Mac G3 with the SCSI2SD version 6, so that's above anything that is shown here. The higher score was definitely influenced by it being a faster computer, but the SCSI2SD version 6 performance is really quite good. First and foremost, I have to give all the credit for the hard work to GIMONS. He was the original architect of the project; he's the brains behind it. It was published in 2017, potentially earlier, on his personal site on the web. He generously made his schematics and source code available on his website, but he intended it for the Sharp X68000 computer.
This is where he developed his host bridge functionality, where the Sharp X68000 can access the Raspberry Pi's file system directly, and it can also access an Ethernet network through the Raspberry Pi. Also in 2017, the 68kMLA user K55 started a thread about this RaSCSI project and started a discussion among the group about whether we could use this on the Mac and what we could do with it. He took the initial stab at it, built a couple of boards, proved out the technology, and proved that it could work on vintage Macs. Again, he shared his schematics and his designs on the forum, which was very helpful. In 2020, with the global pandemic, I jumped into the project with all the free time that we have. In 2020 I laid out a new board with both a DB-25 and a 50-pin header, which gives us some more flexibility, and modified the size so that the footprint is about the same as the Raspberry Pi. I developed the daisy chain daughter board so that you could easily add the RaSCSI in the middle of your SCSI chain. I forked the code on GitHub; GIMONS had released the source code on his website, but it wasn't an actual collaborative development environment that I'm aware of, anyway. The 68kMLA version of it is based on GIMONS' version 1.47. Once the code was checked in to GitHub, I spent a lot of nights with Google Translate going through and translating all the comments to English, since I do not speak Japanese. I also worked with a couple of great members of the community: Frax, Noel Eric, and I worked on pulling together a wiki on GitHub so that we have a place to document a lot of the functionality and help new users get started a little easier. I pulled together a Python script that interfaces with an OLED that can be attached to the RaSCSI, and it will show you which devices are currently attached to RaSCSI. Noel Eric did a great job pulling together a Python-based RaSCSI web control interface, which is the one that we showed earlier; I appreciate all of his hard work. Noel Eric and I also created an install script that you can run to get a new RaSCSI installation up and running in just a few minutes. In October, Joshua Stein did a really cool proof of concept where he was screen-mirroring his classic Mac through the RaSCSI into the frame buffer of the Raspberry Pi. That was really impressive; hopefully we'll merge that into the main RaSCSI tree sometime soon. In January of 2021 we released some beta SCSI Ethernet functionality. It emulates the DaynaPort SCSI/Link. It's still in beta form, still for early adopters, but we're working on that right now. So that's where we're at today. Going forward in 2021, we're working to clean up the Ethernet functionality and get that into the main production build. We're working to get a PowerBook-compatible hardware version that you can put inside your PowerBook. There's a bug out there where you have to use a patched Apple CD-ROM driver; we're going to work to make it so you can use the stock Apple CD-ROM driver. I'm hopefully going to add Zip drive emulation; I'm a big fan of my Zip drive, so hopefully we'll get that implemented. We want to support some additional disk image formats: there are the .toast files, which are pretty common on Macs, and then there's a proprietary Apple image file format that we're hopefully going to support in RaSCSI in the next year or so. Additionally, we're going to work on some configuration and logging improvements, so that you can save your current configuration and reload it after a reboot.
Additionally, we're hoping to move to the GitFlow branching model, establish some consistent version numbering, and get a release cadence going. Right now everything is still tagged as version 1.47, which is what we branched from GIMONS with. One of the neat tools that came out of the development of the SCSI Ethernet functionality is a tool called scsimon. As part of this development, it was necessary for me to capture the signals on the bus so that I could analyze and debug them, but the boss at home wasn't a fan of me purchasing a logic analyzer. So I sort of built my own using the RaSCSI: I couldn't get a logic analyzer, but I could have two RaSCSIs. Essentially, when you're running the scsimon tool, it takes the RaSCSI hardware and puts it in a listen-only mode. While it's running, it spins in a very tight loop capturing the data from the SCSI bus and saving it into memory. Once you stop the tool, it dumps the data into a value change dump file, or VCD file, that you can open up with GTKWave. GTKWave is an awesome tool, shown here; it's great for decoding the data, viewing the sequence and seeing what's going on on the bus. It took a few iterations, but we got the scsimon software down to where it's taking a sample every 80 nanoseconds or so. It's not perfect, it can miss some signal transitions, but using a Raspberry Pi 4 and that 80 nanosecond sample rate you get a pretty good idea of what's going on on the SCSI bus. Another challenge that we had when we were working on the Ethernet SCSI functionality was isolating the traffic. When you boot from a SCSI drive, there's a lot of traffic on the bus from that drive, just from the normal operation of your PC. If you think about it, when you try to do a ping from the Mac, it has to access the hard drive a couple of times to actually do that operation. The solution we came up with uses some really cool tools put together by Big Mess o' Wires: we're using his Floppy Emu device. In addition to emulating a floppy, it can also emulate an HD20 hard disk. In the 80s, Apple made a hard disk that connected through the floppy port, and that's what this Floppy Emu does. I did most of my development on an SE/30, but the problem with the SE/30 is that it doesn't support the HD20 connected over the floppy bus. Big Mess o' Wires has a device called the ROMinator: you can replace the stock ROM that comes with your Macintosh with this ROMinator, and it adds some additional functionality and flexibility. One of those things is that the HD20 functionality will work over the floppy port. So this was an awesome solution: I could segregate all of the drive traffic onto a different bus, and when I was using scsimon I could just look at the traffic from either the SCSI-to-Ethernet device or another RaSCSI, if that's what was on the bus. I'm going to throw out another thank you to PotatoFi for loaning out his EtherMac SCSI device, which is just a rebranded DaynaPort SCSI/Link. I additionally want to thank Roger Burrows. He documented a lot of how the DaynaPort SCSI worked in the early to mid 2000s as he was developing the driver for the Atari ST and the FreeMiNT OS. He did a great job, but he wrote the documentation from the perspective of the driver, so it was missing some key pieces of information that were needed to implement the emulation of the device itself.
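The VCD format that scsimon writes is a simple, well-documented text format, which is why GTKWave can open it directly. Here is a minimal Python sketch of producing such a trace from samples taken every 80 ns; the signal names, the sample data and the sampling loop itself are simplified placeholders, not scsimon's actual code.

```python
def write_vcd(path, samples, period_ns=80):
    """samples: list of dicts like {"REQ": 0, "ACK": 1, "BSY": 0}, one per 80 ns tick."""
    names = list(samples[0])
    ids = {name: chr(33 + i) for i, name in enumerate(names)}   # VCD identifiers: '!', '"', '#', ...
    with open(path, "w") as f:
        f.write("$timescale 1 ns $end\n$scope module scsi $end\n")
        for name in names:
            f.write(f"$var wire 1 {ids[name]} {name} $end\n")
        f.write("$upscope $end\n$enddefinitions $end\n")
        previous = {}
        for tick, sample in enumerate(samples):
            changed = [n for n in names if sample[n] != previous.get(n)]
            if changed:                                          # only dump value changes
                f.write(f"#{tick * period_ns}\n")
                for n in changed:
                    f.write(f"{sample[n]}{ids[n]}\n")
                previous.update({n: sample[n] for n in changed})

# Tiny usage example: a REQ/ACK handshake that GTKWave can display.
write_vcd("bus.vcd", [{"REQ": 0, "ACK": 0}, {"REQ": 1, "ACK": 0},
                      {"REQ": 1, "ACK": 1}, {"REQ": 0, "ACK": 1}, {"REQ": 0, "ACK": 0}])
```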
Between monitoring the real device and Roger Burrows' information, everything kind of came together and we were able to get some SCSI Ethernet functionality. Finally, how can you get involved if you're interested in getting a RaSCSI or just being part of the community? You're welcome to build your own: the schematics, the Gerber files, the bill of materials, everything you need is available on GitHub, so you're welcome to grab that and use it. You can order one from Tindie, either preassembled or in kit form. I do international shipping to most countries; if your country is missing, just send me a message and usually I can add it pretty quickly. One of the key points is that this is not intended to be a money-making venture for me; it's intended to get RaSCSI into the hands of as many people as possible so that we can keep the excitement going on this project. So you won't hurt my feelings if you want to build your own; both options are out there, and we'd love to have as many of you as possible. Join us on Discord: we have a great community of developers and users, and we have some good discussions on there. You're also welcome to join our discussion on the 68k Mac Liberation Army website, the 68kMLA group; a lot of good discussion is going on there. And we'd love to have you try it out on your vintage hardware. If you have things like a NeXTstation or a SPARCstation or some of these other older devices that have SCSI, we'd love to have you try RaSCSI on them and let us know how it works. As an example, last night we just had someone try an Apple II, and their Apple IIe was able to initialize the drive using RaSCSI, which was kind of exciting. So to wrap up, I'd like to say thank you to everyone who's attended this meeting, but also to all the contributors to the RaSCSI project, everybody who's helped in lots of different ways, whether it be updating documentation, providing source code or just trying things out. In closing, I hope to see you in our Discord channel or see you get involved with RaSCSI in some way. Don't hesitate to reach out to me if you have questions, and I'll be hanging around to answer questions on the presentation for a while. Thank you everyone. Thank you, and you are welcome. Bye. There's always a little delay, I guess, watching on a parallel screen there to see when we actually go live. It should be something like that, yes. Yes, thank you, Tony, for this nice talk. I see we have quite a series of questions there that I have been asked to relay by the other attendees, so it would be good to hear your answers on these. If I look at the first one: in theory, it should work with whatever machine using SCSI? Yes, that should be the case. Some of the very oldest machines were developed before the SCSI standard was finalized; an example would be the Macintosh Plus. It works, but you can't boot from the drive, just because it doesn't completely follow the SCSI standard. But yes, in theory, it should work on any device that uses SCSI. Have you been trying it on many different devices? Yes, we've tried it on a lot of different Macintoshes, the Akai sampler, the X68000 from Sharp, and a couple of PCs, so it's been used in quite a few different places. Good, I go to the next one, a question you received via Discord, right? How do you get around the issue of the non-real-time Linux?
Any time the Pi goes off and does something else, isn't that going to stretch your timing on the GPIO? Yes, so there are a couple of answers to this. The first one: there is a bare-metal version of RaSCSI that GIMONS created. Currently it's not supported in my fork of the code, but you could certainly go back and run his version on bare metal. The other part of it, as I mentioned in the talk, is that the Raspberry Pi is just so much faster than what it's connected to that if it gets delayed a little bit, it just slows down the read or the write operation a little bit; it's not a huge impact on the target system. Most of the time it hasn't been a problem. That's why I mentioned early in the presentation that it's not for mission-critical systems; I wouldn't hook it up to anything in a hospital or anything like that. That's actually what I was asking as well. I was just thinking, I saw that you made this comparison with the different versions available, and that box seems to be ticked for all of these versions. I was wondering, there are some installations in critical environments that are still using old devices like these; do you have any knowledge of this being used somewhere in those conditions now? I'm not aware of RaSCSI being used in any critical conditions. The SCSI2SD device is used in a lot of different places; that one potentially might be used in a safety-critical application, I'm not sure though. I go to the next one: do you see a possible convergence or cooperation with projects like CosmosEx, which does the same kind of things, but for the Atari ST computers, with ACSI as well as SCSI? Yeah, prior to last night I had never heard of the CosmosEx project. I tried to read up a little bit last night. There may be some potential for convergence; it looks like the CosmosEx does a lot more stuff than RaSCSI does. One of the goals with RaSCSI is to keep it simple and inexpensive, but it's definitely something we can look at in the future as far as merging with other projects out there. I see a remark that my microphone is quite low, so I will get closer to my computer, so you will see my face; I hope this will solve the issue. I see this other question: is the 50-pin connector fast SCSI, or plain SCSI only? Yeah, it is not fast: it's only SCSI-1 and SCSI-2, the non-fast and non-wide version, that's the only version that's supported currently. There's a limited number of GPIOs out of the Raspberry Pi, so that's kind of where we're sitting. Okay, I see a new question coming in: what is the bandwidth in megabytes per second? A lot of it depends on what you're connected to; typically the host PC is the limiting factor. I've seen over a megabyte per second, but I don't typically run it on very fast machines, so it's possible that people might have experienced faster. I see another one; I don't know if it's really a question, it's more like a comment about a WeatherSTAR: it has a 68k CPU, and someone reverse engineered it and wrote new software to make the system run again. Is this WeatherSTAR something you've been adding? Technite is in our Discord channel; I think I've seen him talk about it a couple of times before. It's a pretty cool project; I think he brought it up as a potential future topic for this devroom. I see another question, but it's not popping up... oh, it just popped up now. Have you considered using GPIO expanders? Would the CPU be fast enough for getting more bandwidth? I have considered it.
I haven't considered it very seriously, because I guess my target use case is old computers that don't really need that high performance. It might actually slow things down if we had to use a GPIO expander instead of directly using the GPIO of the Raspberry Pi. This next one isn't quite a question yet, but I'm expecting it to pop up here. There it is: how can RaSCSI, not the bare-metal build, be powered off safely? Right now we don't really have a good way to do that. I've just pulled the plug hundreds of times and haven't had any issues, but there are better ways to do that; that would be a good feature we should add sometime in the future. Right now you just have to SSH or open a terminal to the Raspberry Pi and shut it down that way. So you just unplug it, and next time you just have to hope that it works again. I don't see more questions coming, but I have one actually: would you consider any alternative to Raspberry Pi OS? Would that bring any added value, or maybe new packages or functionality, for example if you could install BSD or a Linux from scratch or something like that? Yeah, one of my goals is to use something like Buildroot or Yocto to build a very, very stripped-down version of Linux, so that we can get the boot-up time faster and strip out some of the unneeded functionality in there. That's something on our roadmap to go and look at. In theory it should work; we've had people use RaSCSI on other Linux distributions and it doesn't really need any fancy libraries or anything, it just connects to the GPIO and the file system and that's about it. I mean, when you showed the history there, we could see that it was something that already started in 2016, 2017, and then indeed, thanks to the lockdown and the pandemic, there was a huge amount achieved in 2020, and now you have this plan for 2021. Have you seen more people getting involved in this project, or is it only you and the couple of people that you mentioned that are working on that? Do you see interest also coming from other groups? Yeah, I've seen a lot of people from the Akai sampler community that have expressed interest. As I mentioned in the chat, we did have a Sharp X68000 user who got it to work, and I'm trying to remember if there were any other ones; I think somebody tried a NeXTstation. It's growing beyond just Macintosh users for sure. Okay, good. Thanks. I'm looking at this other window to see if there are other questions popping in; I don't see any. Okay. You mentioned some links already, but maybe some resources or some groups? I think you mentioned that there is also a Discord group for people interested and willing to know more or to dig deeper into the subject. Yeah, I was just looking here and we've got about 79 people in our Discord channel, so yeah, definitely come and join the conversation. There are people from all over the world, so there's usually someone on there to help answer questions, or if you just want to chat. You can also get to all the resources if you just go to rascsi.com, which just redirects to my GitHub wiki; everything is available through that. Maybe afterwards you can also put these links here in the chat. Yep, definitely. I see a new question coming: any plans for USB-to-SCSI emulation support? It would be possible; we haven't looked at it yet. I'm not sure this next one is a question for you, it's more of a comment about BlueSCSI. Okay, I see the link that you have put there.
I think there might have been a question much earlier in the presentation about which of the SCSI emulation boards or tools are open source and which ones are not. So RaSCSI is completely open source. BlueSCSI is completely open source. For SCSI2SD, the older versions were open source, so version 5 and earlier; version 6 is closed, and the MacSD is closed source as well. Any last-minute questions? I think we can of course stay tuned, because next comes the talk about Gemini, I don't know if I pronounced that correctly, a modern protocol that looks retro. That will be presented by Stéphane Bortzmeyer, who works at AFNIC, the domain name registry for France. This guy has been talking here at FOSDEM several times before; he even wrote books about the relationship between internet infrastructure and politics, and there is a nice website for that. You can find all of that on the page presenting his talk, which will start in about seven minutes from now. So thank you again, Tony, and talk to you soon. Thank you.
The talk will cover the current status of the 68kMLA fork of the RaSCSI project. To start off, I'll go over what the project is, and is NOT. I'll go over the history of the project, what we've been up to over the past year and what's planned for the next year. I'll also go over some technical details of how it works and how the software is structured. Initial outline: - What is RaSCSI? - What use cases is RaSCSI trying to fill? - Where has it successfully been used? - What use cases is RaSCSI NOT trying to fill? (Hint - high performance SCSI applications) - Comparison to other SCSI emulator devices: Original RaSCSI project, MacSD, SCSI2SD, etc. - Other communities that are using RaSCSI - History
10.5446/53354 (DOI)
Hello everyone, I hope you are comfortable. In my talk, I will show you how to revive old listings in an emulator, in this case the MAME emulator. My small application is Java-based and is called scan2run. The goal here is to start from an old listing, a BASIC program printed in old magazines from the 80s, and to get it running in an emulator, in order to watch the program run and understand what those magazines were really offering. The context is that I volunteer at the computer museum NAM-IP, which is located in Namur, in Belgium, so you can come and visit it there. I hope that next year we will be back at the ULB and that we can organize something on site. Of course, the museum's mission is to preserve digital heritage: to acquire new artifacts, computers, and to organize exhibitions. We have an exhibition about the microcomputer heritage, which is also why I am interested in reviving what was done on these computers and in producing resources around them. You can see that the museum looks like a big container, and indeed the interior is really structured around containers, because containers historically developed in parallel with computers. So we do digital preservation, and of course one of the constraints we have at the museum is that we cannot easily run the original machines; that is always a bit risky. The workaround, of course, is to use emulators, and well-established emulators, especially MAME, which is quite mature and gives us the possibility to emulate these machines. On the media side, we have tapes and floppies, and there is another source, which is my interest here: looking at what is available in magazines. There we only have paper, which means we need to scan these listings and inject them into the emulator in order to run them. So my talk will go through two steps, and we will start with the scanning step. As you can see, the listings were printed with a dot-matrix font, so they are generally low resolution and can suffer from quality issues. A listing is not plain text: a typical listing is a mix of numbers and keywords. This does not work very well with statistical character recognition, unless you train it on a specific corpus, which is not what we want to do here. And of course the point is that we want to reuse what we learn: if we can learn something, we want to be able to reuse it, because the same font is often used many times in the same magazine. The approach we selected: of course we want open source, and we do not want a fully automatic approach, we want a programmer in the loop. We want to be able to give the necessary input and to make sure that the training phase goes well, and at some point we reach a phase where there are very few errors left. We also need a very simple approach, keep it simple and stupid, at least for now.
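To make the matching strategy concrete, here is a toy Python sketch of the mean-square-error comparison and the learn-as-you-go loop described above. The real application is written in Java on top of Java OCR; this version assumes the character images have already been segmented and normalized to the same size, and the threshold value is just an illustrative guess.

```python
import numpy as np

def classify_glyph(glyph, training_set, threshold=0.05):
    """glyph: 2-D array of pixel values in [0, 1]; training_set: dict label -> template array."""
    if training_set:
        scores = {label: float(np.mean((glyph - template) ** 2))   # mean square error
                  for label, template in training_set.items()}
        label, error = min(scores.items(), key=lambda kv: kv[1])
        if error <= threshold:                                     # confident match
            return label
    # No confident match: put the programmer in the loop and grow the training set.
    label = input("What is this character? ")
    training_set[label] = glyph
    return label
```

In the real tool the threshold is adapted dynamically and a confirmation is requested when two templates score almost equally well, which is what keeps confusions like 8 versus B from propagating through a whole listing.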
After testing black-box approaches a bit, like Tesseract OCR, we realized they were quite difficult to master, and we went for a very simple solution based on Java OCR, a very small code base that we can really master completely. And of course we can improve it later. The organization of Java OCR is very simple. The document is first of all broken into rows and characters or spaces. Then each character is compared against a training set and the best match is selected using a mean square error. The problem is that you have to input all those characters before starting the scan, and of course we don't want that. We really want to learn the characters and discover them as the scanning progresses. Our scan2run application builds that training set dynamically. In this screenshot you can see our user interface. This area shows the image. This is the panel where we ask specific questions about the characters to recognize. The listing that has been recognized is dumped here, and on top of it you can see, as a pop-up window, the few characters that have been recognized so far: it represents the training set built up to that point. What is important is that for the first characters we require a high match, to make sure that we have no confusion between characters, and then we adapt the threshold dynamically. Later on, if there are two close matches for a character, we systematically ask for confirmation. Now a short word about the design of our application. It is based on the Java OCR design, which is shown on the right part of this picture. It has a basic demo using a scanner implementing visitors, with a number of methods that are able to manage documents, rows and characters. Our extension is this part, with the learning user interface, the image area and the interaction area, and we have a big observer here that is connected to the legacy Java OCR. It works this way: first of all we start the scanning, which is forwarded to the document scanner. When the document scanner hits a character it does not know, it requests assistance through the interaction area; of course it also displays all the information and the caret. This feedback is sent back to the scanner, which updates the training set, and the process continues until the whole document is finished. At the start, the scanner has to recognize everything, so it asks quite often in the beginning. So let's start. And here we go: this is a one, as you can see, highlighted by the red rectangle, and we are asked quite systematically at the beginning. You can see it is already recognizing some characters. I could show you the internal database, but due to the screen capture software I am not able to show you an extra window on top, so you will have to trust me that it is building up that collection of ASCII characters. You can see here it asks for all those lower-case letters. And you see the question is "what is the character": it really asks only for unknown characters. Here you can see it asks for confirmation, because there was a previous A but it is not quite sure this one is an A, so it asks to avoid any ambiguity. And now I am progressing. This is an L; I can also check on the screen to make sure.
So let's start the demo. Here we go. This is one character, highlighted by the red rectangle, and at the beginning we are asked quite systematically, as you see. You can see it is already recognising some characters. I could show you the internal database, but because of the screen-capture software I cannot show you an extra pop-up window, so you will have to trust me: it is building that collection of ASCII characters. You can see it asking about all those lower-case letters, and the question is "what is the character?": it really asks only for unknown characters. Here you can see it asking for confirmation, because there was a previous A but it is not quite sure this is an A, so it asks to avoid any ambiguity. And I am progressing: this is an L, and I can also check on the screen to make sure. You can see the 8 being confused here, so the question is quite relevant to ask. And actually, when you see the character, you just type what you see: it is the best way to progress quickly, and that is what I am doing. I think I will try to cover the whole listing because it is not so long. This is a plus, plus, plus. You can see punctuation is a bit difficult to recognise; it is a known limitation due to the small size of those characters, so the code could be improved there. Now let me zoom out a bit. Here you can see a confusion: two characters that have not been separated, so I type both of them. Ah, we have a problem: a double quote that is not recognised; I will fix that directly. OK, and the scan is finished. Let's have a quick look. This one is not quite correct, but there is no hidden confusion; we will verify that at the injection phase. We can save this and go to the next step. You may ask what the performance is, and there are two metrics: the interaction ratio, the number of questions over the number of characters, and the number of errors. It depends on the scan quality; ideally you use at least 300 dpi. The interaction is of course useful for artefacts, but also, for example, to compensate for contrast; there is some internal processing, but it is limited. On typical listings of medium quality I get interaction rates around 10% on very short listings; if you have already trained on the font before, it is less, and with good quality, after one page, it drops to maybe 3% or lower. As for errors, the rate can already be quite low, but you will still get syntax errors when you inject. You can fix them at that point, but it gets harder, so it is better to have some kind of inspection. When you see a problem, the confusion is probably systematic and repeated, so you can try to correct it or adjust the thresholds and restart the process, to make sure you really get a lower error rate for your recognition. There are also limitations to this work. As you saw, characters can get merged, which is not really a problem. There are punctuation issues, so the fix is probably to normalise the thresholds for those characters. There are also confusions and merges between rows that need internal adjustments, but that has not been a priority so far. It is still a prototype, but the result is quite interesting: it is already faster than retyping the program. The second step is the run step. What we would like to do is inject a program into a running emulator. Some emulators already have this kind of functionality, so there it is not a problem, but those emulators are specific to one machine, because they know how to push the input directly into the interpreter. In the general case, like the MAME emulator, the only input port is the keyboard, so we can try to inject as if you were typing.
And MAME has a nice feature: a scripting capability based on the Lua language, which brings a lot of functionality, such as registering callbacks on emulator events, which is exactly the kind of logic we need here. And we can do it with a very, very basic solution: these two lines are enough. You just open the file, break it into lines and post the lines into the emulator. That works, at least with the emulated machines I have tried, with a few problems that I will show you. There is also a more elaborate variant I developed, which was not entirely a success, for machines that can only accept a few characters at a time, or one character per frame; that other design posts only one character per frame. I will come back to it. OK, welcome now to the injection part of the demonstration. The starting point is the listing we just obtained from the first step. You can see the errors are not very important, but the point is to make sure the listing is clean, so I fix the few remaining characters, and well, that's it, so this is ready for injection. Now I start the emulator, with the console mode enabled; you can see it has started. I have a few machines here; I am using the DAI computer, so the emulator boots its BASIC, and now we are going to inject. I use the two lines that do the line-based injection and just paste them here into the console; I could also put them in a script file and launch it from the command line, but this way you see the two lines and what happens: if you look at my emulator, it is accepting the characters. I will use MAME's speed-up function so I can get the result faster, and you can see everything is going well, the lines are recognised. I added a few extra carriage returns, because I noticed that sometimes a line can otherwise be missed; for this machine it is better to add an extra carriage return, which is simply accepted by the interpreter, and it is also the moment where the syntax is checked. This time the recognition is fine, it looks like a perfect run, and that's it. I stop the speed-up and just run the program, and we get this nice maze. I had already figured out, from running it before, that I control this dotted character and that my job is to force the blue dot to escape from the maze, not to escape myself. So yes, you can play it, it is a really, really basic game. Thank you. Thanks to this little toolchain you can recover other programs, and you can see here a few programs I recovered recently: some mathematics with 3D curves, a radar simulator, and this superb maze game.
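As a rough illustration of the injection idea described above (the exact script from the talk is in the scan2run repository, not reproduced here), a MAME Lua console sketch might look like the following. The file name is a placeholder, and the `emu.keypost()` and `emu.register_frame_done()` calls are from MAME's Lua scripting API as I understand it; names can vary between MAME versions, so treat this as a sketch rather than the project's actual code.

```lua
-- Sketch: posting a scanned BASIC listing into MAME through its Lua console.
-- "listing.bas" is a placeholder file name.
local PER_FRAME = false  -- set to true for machines that only accept one character per frame

if not PER_FRAME then
  -- Line-based injection: post each line followed by an extra carriage return,
  -- which the BASIC interpreter simply accepts.
  for line in io.lines("listing.bas") do
    emu.keypost(line .. "\r")
  end
else
  -- Character-per-frame injection: feed one character on every emulated frame.
  local f = assert(io.open("listing.bas"))
  local text = f:read("*a")
  f:close()
  local pos = 0
  emu.register_frame_done(function()
    pos = pos + 1
    if pos <= #text then
      emu.keypost(text:sub(pos, pos))
    end
  end)
end
```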
So to conclude, if you want to try the toolchain, it is available from this repository. Of course there is still work to do, and as I said it is quite basic, but it works for these BASIC programs. My contribution also tries to document the architecture and explain it, and it is quite easy to extend, for example if you want to add more processing capabilities, AI-based recognition, things like that, using a good library; that could be interesting, but still in a controlled, interactive mode. That is probably where I would like to invest in the future. And of course, if you have ideas or feedback, or want to contribute, it is very welcome. And now I will open the floor for questions.
Scan2Run focuses on the digital preservation of computer heritage distributed in paper form (e.g. old magazines with BASIC programs), which may be the only available format. Transforming such a listing into a running computer program and sharing the experience requires quite a few steps: retyping the program, loading it into a vintage computer or emulator, and capturing some results in textual, image or even video format. Our talk will illustrate our current approach and progress with a toolchain developed for the NAM-IP Computer Museum to help automate the scanning of old listings (including learning and reusing profiles), then injecting the result into an emulator with MAME as primary target. Our talk will be illustrated with examples from the widespread Amstrad CPC and the rare DAI In-DATA Imagination Machine. It will also be the opportunity to revive and illustrate some capabilities of those nice machines! More technically, the scan2run project (available from GitHub) supports * OCR with an interactive learning phase, forked from JavaOCR, a lightweight OCR engine * Lua scripts for MAME injection, either on a by-line or character-per-frame basis (depending on what the machine can accept as input) * complementary use of screen/video capture and post-processing open source tools for documentation and dissemination purposes (Gimp, OBS, OpenShot...)
10.5446/53355 (DOI)
Hello everyone, my name is Angel and today I'm here to talk to you about my experience in making a USB adapter for a 40-year-old keyboard for the Vista 80. First, a little about me. I'm a hardware hacker and programmer. I live in Montreal, Canada and have always loved retro technology. Now, a little history about the keyboard and its purpose. The Vista 80 was one of the first character generators to use a computer chip as a controller. At its core, it houses an Intel 8080A and stores data on dual 8-inch floppies. The museum says that its purpose was to create pages, lines of text or titles for television broadcasts. This specific model was used in the first broadcast from the Canadian House of Commons in 1977. In the House of Commons, it was mainly used to write the descriptions of what was going to happen at the start: the names and titles of people and the order of the day. Here are some samples from the brochure of the Vista 90 provided by MPB Communications. As you can see, it could do different fonts, different styles, it could do maps, different backgrounds, some basic 3D, it could also do technical information, some charts, scientific data, diagrams and even basic animation. It was a highly flexible device. These units were manufactured by a Montreal-based company called Bytec Electronics and sold by MPB Technologies in the 70s and 80s. I may not have the Vista 80 in its entirety, I wish, but in May 2019 I purchased three of the keyboards from GC Surplus, a government surplus website, and picked them up from the Canada Science and Technology Museum in Ottawa. This specific keyboard seems to have also been made around 1977 according to the date codes. As you can see, this keyboard is very colorful, not unlike other keyboards of its type. I originally assumed that the keyboard had yellowed with age, but the inside is the same color and the museum website seems to mention the yellow color in their archive. It is made of a very brittle plastic that has sadly started to crack on some of the keyboards. We can see the Vista 80 badge and the MPB Technologies badge at the top of the keyboard. MPB Technologies is the parent company of Bytec and is still active today as MPB Communications. We can also see the big red power button at the top. It's there because this keyboard, as you can see in a moment, has its own power supply and does not take power from the main device. This also explains the power cord. Let me take the cover off so we can see inside. Here we can see the matrix, the encoder board and a bit of the power supply. The matrix was made by Honeywell and is custom for this keyboard. It uses Honeywell's solid-state Hall effect switches. The only moving part in these is a spring that is very easily replaceable, meaning that these switches, even after 40-50 years, work just like new. This board only has basic Motorola logic chips, some of which I could not find the datasheets for easily, and some support components. Let me get this out. This is the encoder board. It takes data from the matrix and sends it to the computer. The star of the show is the Nitron NC 2257LC. This chip was built to replace a Motorola chip from the 70s called the MC 2257L. Information on this chip was very scarce, but I was able to find a block diagram and pinout on archive.org. The NC 2257 seems to be an old USART chip. It allows the keyboard to convert the data from its matrix directly into a serial stream. By reverse engineering the board, I was able to find how it sends the data.
First, the clock is generated by a 555 timer and sent to pin 8. The NC 2257 then divides this clock by 16. It uses even parity and one stop bit and sends that information down the line. The signal is then sent to the 75150TC, which changes the 5V logic into minus 12 and plus 12 volt logic. On the other side, after changing the logic back to 5V, its counterpart chip, the NC 2259LC, recovers the clock, decodes the packet and splits it back into parallel.
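The framing itself is ordinary asynchronous serial. As a rough illustration only (not taken from the NC 2257 datasheet, and written in Lua simply to keep all the sketches in this document in one language), a character framed with even parity and one stop bit looks like this:

```lua
-- Illustration of generic async serial framing with even parity and one stop bit.
-- The 7-bit word length here is an assumption, not a datasheet value.
local function frame(code, bits)
  bits = bits or 7
  local out = { 0 }                           -- start bit
  local ones = 0
  for i = 0, bits - 1 do
    local b = math.floor(code / 2 ^ i) % 2    -- data bits, least significant first
    ones = ones + b
    out[#out + 1] = b
  end
  out[#out + 1] = ones % 2                    -- parity bit: total count of 1s becomes even
  out[#out + 1] = 1                           -- stop bit
  return out
end

print(table.concat(frame(0x41), " "))         -- the frame for ASCII "A"
```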
Now for the power supply. Under here there isn't much apart from a very simple power supply composed of a transformer and a regulation board. The regulation board only produces minus 12V, plus 5V and plus 12V. At the top we can see a DB25 connector. Oh, give me a second. It's stuck. Yes, here. Only 5 pins on it are used: data, ground, V+ twice, and earth. Sorry. So when I first got these keyboards, I immediately knew I wanted to make one work with USB. These are so beautiful and classic. The first thing I tried was to hook my tiny portable oscilloscope to the data pin, and I started making a spreadsheet, but I quickly realized that this setup was not optimal. My oscilloscope wasn't triggering, the signal was glitchy and I couldn't see all the bits. I got tired of things not working out, so I decided to take a Chinese copy of the Teensy 2.0 and solder it to the back of one of the encoder boards. The Teensy 2.0 is a board made by PJRC. It runs at 16 MHz, has 32 kilobytes of flash, 2.5 kilobytes of RAM, and 1 kilobyte of EEPROM. The newer ones are very powerful ARM-based microcontrollers. I own many of them. I encourage people to buy them directly from PJRC, as Paul Stoffregen is a great developer and a gift to the open source community. I plugged the keyboard in and used the same small oscilloscope to probe the pins and find all the pins that change when I press the keys. In total, I found 9 pins: 7 data pins and 2 trigger pins. I had enough data, so I made this. This is my prototype. It's a bit ugly, but I just needed something I could use for tests. All this is, is just a Teensy I had lying around, soldered to the voltage, ground, and data pins. Next, I needed to start mapping the matrix. After flashing a new bootloader on the Teensy to make it work like a normal Arduino, I wrote a quick little script in C to output the state of the data pins over the serial bus. I started pressing random keys and noticing a few things. Firstly, I noticed that no matter how many keys you press at once, only one key is sent to the board. I also noticed that holding a key sends just one trigger signal, registering a single key. This keyboard has three modifier keys: Shift, Shift Lock, and Control. The Shift key does what you would expect: it lets you access symbols and capital letters. Shift Lock behaves the same as holding the Shift key. The Control key, however, seems to make every key behave differently, but for some reason the changes it makes overlap with other keys. I have not yet found out how this key is supposed to work, but I have some ideas. Since you can't hold a key to repeat it, they had a key labeled REPT: when held, as long as another key is pressed, it pulses the trigger line, registering multiple characters. Instead of storing all the key codes in a single table, I decided to use the three upper bits as an index into eight different tables. This ended up working out: tables 0 and 1 were lowercase, tables 2 and 3 were uppercase, 4 was numbers, 5 was symbols, and 6 and 7 were the special keys. The people who designed this keyboard must have decided to keep it simple, as the letters and numbers were automatically in order. The index within each table was four bits, and therefore we had eight tables with 16 characters each.
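As an illustration of that lookup, here is a small sketch, again in Lua only to keep the sketches in this document in one language; the real firmware is C on the ATtiny, and the table contents below are invented placeholders, not the actual Vista 80 assignments.

```lua
-- Illustration: a 7-bit scan code where the upper 3 bits pick one of eight
-- 16-entry tables and the lower 4 bits index into that table.
-- Table contents are made-up placeholders.
local tables = {
  [0] = { [0] = "a", "b", "c", "d", "e", "f", "g", "h",
          "i", "j", "k", "l", "m", "n", "o", "p" },
  [4] = { [0] = "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" },
  -- tables 1-3: more lowercase and the uppercase letters,
  -- table 5: symbols, tables 6-7: special keys (omitted here)
}

local function decode(code)                    -- code: 7-bit value read from the matrix
  local table_index = math.floor(code / 16)    -- upper 3 bits select the table
  local offset = code % 16                     -- lower 4 bits index within it
  local t = tables[table_index]
  return t and t[offset] or nil
end

print(decode(0x02))  --> "c" in this invented layout (table 0, offset 2)
```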
With the Arduino HID library, it was very easy to decide whether a character was uppercase or lowercase, without even thinking about the Shift key. All that was left was to assign some keys to the normal modifiers you find on a regular computer, like Meta, Control, Shift, and Alt. I originally settled on the Blue key, Center key, Channel key, and Alpha key respectively. With all this information, I felt I was ready to finally make the circuit board that would replace the encoder board. I started by writing down my requirements for the project. For all my previous projects, I've always favored ATtinys, as they are small and powerful. I also have a preference for always using the minimal viable microcontroller. I wanted to use the V-USB library, and I also wanted to try not using the Arduino IDE for once, so I looked online and found that the ATtiny2313 was quite popular, and that there was a 4K variant, the ATtiny4313. The ATtiny4313 uses the AVR RISC architecture. It has 4KB of flash, 256B of RAM, 256B of EEPROM, and 18 I/O pins. Limited, but just enough for my project. Mapping the I/O pins I would need, I got the following: 7 I/O for the data from the keyboard, 1 I/O for the trigger pin, 3 I/O for the status LEDs, 2 I/O for the crystal, 2 I/O for USB, and the last pin was reset. That left me with 2 I/O pins that I used as a serial port. I opened KiCad and started laying out the schematic. One of the small problems I faced was that while my keyboard and the microcontroller worked at 5 volts, USB works at about 3.3 volts. I got around this by using Zener diodes to reduce the voltage on the lines. This solution did make the circuit slightly more vulnerable to noise, but I haven't had any problems with that. I was feeling kind of lazy, so I didn't measure the components, and I just used what seemed right. This is the PCB I made. The only mistake I ended up making was picking a footprint that was a little bit too big for the resistors, which didn't end up hurting anything in the end anyway. Here's the board I made. I could have made it much smaller by using surface-mount resistors and making the board double-sided. I could also have used USB micro, mini, or C to make it more modern, but I instead chose USB-B as I like the big retro-ish connectors. The through-hole resistors also make it look like it could have been made at the time. Now that the hardware was done, I only needed to port my Arduino code to it. Apparently on Linux, if you don't want to mess with setting all the parameters by hand, you have to settle for Eclipse. After installing and setting it up, I started writing the code. It was mostly the same, except I had to swap the pins around and figure out how to send HID keyboard codes. The usb.org website has all the documents you need to find the code for each key. So wonderful. Once I wrote the code and the descriptor, I tried flashing it to the board. And nothing happened. dmesg told me the device was not responding. Troubleshooting time. Maybe I was just too ambitious. I started by writing a quick hello world to blink the LEDs, just to make sure that the processor was actually running. And it started flashing. Good start. I then suspected that my descriptor may have been wrong. I don't remember how I created the original descriptor, so I went to the V-USB project page and saw that someone called Mikkel Holm Olsen had posted a project where he converted a Commodore 64 keyboard to USB. I copied his descriptor, and suddenly the computer recognized my keyboard. It was super slow, but it worked. I then spent a couple of hours rewriting the proper key codes and optimizing my code so that it didn't run at one key per second. It turns out one of the big problems was just that I was printing to the console at each key press. That was fine on the fast Teensy, but it was too slow for this small chip. I also moved all the modifiers to the left side of the keyboard so that I wouldn't have to remember where they were. One last thing I had to figure out was how to tell whether the keyboard was still connected. I settled on a very dirty system of sending a packet every so often and checking if I received a response. If no answer is received, the LEDs flash until a connection is established. In the end, my code took up 91% of the ROM and 51% of the RAM. Just trying to use printf would make the code too large for the chip. This project was very interesting, and I plan to make another version that plugs into the back instead, so as to be less invasive. I would like to thank the Canada Science and Technology Museum for providing me with the keyboard, MPB Communications for providing me with the brochures of the Vista 80 and 90, Mikkel Holm Olsen for the USB descriptor, KiCad for providing free and open source software for circuit design, and all the people who helped me with this talk by reviewing it. If you have any questions, please feel free to contact me at any of these links. My talk, the slides, and the script will be made available on my website, www.jewelette.net, after FOSDEM. Thank you very much for listening.
The talk will be about how I bought a retro keyboard for a VISTA80 from the Canada Science and Technology Museum and reverse engineered it to convert it passively to USB. The VISTA80 was a machine built in Canada and was used to "Create pages of text for cable TV systems or to create running lines of text or titles for television displays."¹ The VISTA80 was manufactured around 1975-1977 and was "One of the first character generators to use a computer chip (Intel 8080A) as a controller"¹. - The history of the Vista80 - A look inside of the keyboard - The original circuitry - Prototype - Mapping the matrix - Making the circuit board - Highlights and lessons learned I would potentially like to collaborate with the museum to get more information on the device.
10.5446/53357 (DOI)
Hello guys, my name is Teo and I will speak about how to achieve high-performance rating of calls within CGRateS without storing the CDRs. Something about me: I work as a senior software developer at ITsysCOM, a company offering VoIP and billing platform implementations since 2007, and we are focusing exclusively on CGRateS. A short introduction to CGRateS: it's a real-time enterprise billing suite, it's pluggable into an existing infrastructure and it's non-intrusive. I like to say that you don't have to modify your business logic to work with CGRateS; you configure CGRateS to work with your business logic. It's open source; all sources are available on GitHub, where we also have examples of both tariff plans and configurations for the most common things like number portability, fraud detection, LCR, etc. The project is developed test-driven; it has more than 7000 tests as part of the build system. An important thing about CGRateS is that it has a modular architecture; because of this, most of the subsystems can be used as standalone components outside of the CGRateS code. Each subsystem has a rich set of APIs. It's feature-rich: it can be used as an online or offline charging system, it's multi-tenant, it supports derived charging and A-number rating, and it has account balance management with bundles. For example, the first 3 minutes or the first 100 SMS are free, after that you pay 1 cent per SMS, and combinations like this. It supports event charging with balance reservation and refund, CDR logging with support for interim records, fraud detection, and LCR with QoS or bundles; for example, free minutes are taken into consideration when we choose the winning supplier. CGRateS also offers a large set of built-in metrics, or you can define your own. Here we have a picture of the CGRateS architecture. On the left side we have the input sources. All these sources communicate with CGRateS through different agents. They convert the request into a CGR event and send it further to sessions to be processed. From there it is sent to different subsystems: for example, to AttributeS to add, update or remove specific fields, to RALs to apply a cost, to ResourceS to allocate a channel for the call, and so on. In this talk we'll focus on the event reader (ERs) subsystem. The event reader can import events from various sources, for example from files or from external sources like Kafka, AMQP, SQS. It supports partial CDR merging; for this, the event reader uses a cache mechanism. Partial CDRs are stored in memory until they find their pair. In case a partial CDR never finds its pair, to prevent memory overload we simply remove it from the cache based on a timer, a TTL option for example. The event reader, like the other agents, can be configured based on templates defined in JSON format: you pick which fields you want in order to build the CGR event. Further processing is controlled via processor flags, and regex is supported for content mediation. AttributeS is the subsystem in CGRateS used to add, remove or update fields of a CGR event. The configuration for AttributeS contains two parts: one part is the configuration file for the engine, where we enable the subsystem and configure other settings like indexes, and the second part is the attribute profile, which contains information about which fields of the event to update. The selection of the attribute profile can be done based on indexes, or the user can directly specify in the API request which attribute profile to use. CDRs is the subsystem that handles CDR processing; it can be used outside of the CGRateS scope.
It has a full set of APIs available. It can receive events from various sources, like the API, or over HTTP if the HTTP handler was configured. Together with StatS it offers various information about all CDRs or specific CDRs, like average call duration, average call cost, total call duration and post-dial delay (PDD), and it can export CDRs in real time to the event exporter without storing them. The event exporter is the subsystem in CGRateS used to export CGR events to different outputs. It can export the same event to multiple outputs; we had customers that export the same event to three outputs: Kafka, Elasticsearch and RabbitMQ. The event exporter can be configured based on template fields defined in JSON format, selecting only some fields of the event or CDR to be exported, and it supports regex for content mediation. Here is an example of a workflow for processing CDRs and CGR events in real time. A Kafka server communicates with the event reader on the left side, FreeSWITCH sends events to the FreeSWITCH agent, and the event reader on the right reads CSV files from a folder. All three send the request to sessions to be processed. Sessions sends the event to AttributeS to be updated, for example to add an additional field, to remove one, or to update one, and AttributeS responds to sessions with the updated event. After that the events are sent to CDRs to be processed. CDRs also sends the events to AttributeS for additional modification if needed. Additionally, CDRs asks RALs what the price for the event or CDR is and applies it. After they are processed successfully by CDRs, they are sent to the event exporter. The event exporter processes the event with AttributeS and builds the event based on templates to be exported to the specific place. This workflow runs in real time and takes only a few milliseconds, somewhere between 10 and 20 milliseconds. In case CDRs does not need to ask RALs for the cost, for example because the CDR or event is already rated, the whole process happens in less than 10 milliseconds. Here is a sample processor template for the event reader. The ID is the ID of the reader. Run delay is minus 1, which means the files are read as soon as they appear in the folder. The field separator is set to comma, the default separator for CSV files, but it can be set to whatever you want. The type of the reader is *file_csv. The source path is the folder from which the reader reads the events. The flags set what to do with the events after they are built: in this case, send them to sessions and from sessions to CDRs to be processed, and log the event, by default to the system log. The fields define what the event will contain. For example, the value of the first field, Tenant, will be the value from the first column; for the field ToR, the type of record, the value will be a constant, *voice, and so on. Here is a sample CDRs config. First we enable the subsystem. After that we create the connections with AttributeS, RALs and the exporter. Online CDR exports set which exporter to use in order to export the events. Here is a sample processor template for the event exporter. The first exporter will export the event to a Kafka server in a JSON map format. The export path is the address of the server, in this case localhost and port 9092. In the options section extra options are set, in this case the topic, with the value cgrates_cdrs. Tenant has the value cgrates.org. Attempts sets how many retries to do in case the event is not sent successfully the first time. The fields set which fields the exported event contains.
The second exporter will export the event to an AMQP version 1 server in a JSON map format. The export path is the connection string used to connect to the AMQP server. In the opts section, as above, extra options are set, in this case the queue ID. The tenant and the fields are the same as in the case of the Kafka exporter, but this exporter will only try once because attempts is one. Now a short recap. In the first part of this talk we presented what CGRateS is and a short list of features. After that we presented, in a picture, the full architecture of CGRateS, how the subsystems communicate with each other, and the path of a simple workflow. Next we analyzed some subsystems and presented a workflow that achieves rating and export in real time without touching the hard drive. In the last part we presented some configuration samples for the event reader, the CDR server and the event exporter.
Instructs the audience on achieving high-throughput online exports of charged events with in-memory data only. In this talk Teo will present the mechanisms implemented to achieve straight in-memory processing with online exports for events to be charged by CGRateS, without the need for further storing them. The following CGRateS modules will be covered: ERs, SessionS, CDRs, RALs, AttributeS, EEs. CGRateS is a battle-tested Enterprise Billing Suite with support for various prepaid and postpaid billing modes.
10.5446/53363 (DOI)
Right, so hello everybody. This is another Matrix talk that we're doing, for FOSDEM 2021, and we're doing a talk on Gitter and basically how we've added Gitter into the Matrix ecosystem. With me today, I've got Eric Eastwood. Hello. Give an introduction Eric, who are you? I'm Eric. I work on Gitter. Nowadays, lots of Matrix and Gitter integration stuff, but we'll get into a lot of that in this presentation. Yep. I'm Half-Shot. I have been the long-suffering bridge engineer for Matrix for a few years now. I tend to maintain the more classic bridges; that used to be the Gitter bridge, the Slack bridge, and I do bits and bobs around that area. But anyway, let's talk about: what is Matrix? Matrix is a communication platform for interoperable, decentralized real-time communication. So what does this mean? It means you can spin up home servers across the internet and talk between them as users. So you can have a server like mozilla.org and another server like matrix.org, and you can have users in both these places in their own communities, but they can also talk together. And this is really crucial, especially for what we're talking about today. It provides a standard HTTP API for publishing and subscribing to real-time data in channels, basically. So it means you have a set of rooms, and they are your channels, and you send bits of JSON into those rooms using HTTP. And on top of this you can power IM, but also VoIP, WebRTC signaling, IoT communication, and even more cool things like presentation software. You might have seen my talk last year, where I gave a presentation on top of Matrix: the presentation itself was running on Matrix and people could follow along using Matrix events to keep everything in sync. So there's a lot of power behind this protocol. Anything really that's got JSON involved can be used on this protocol. It can essentially be considered a real-time database with a lot of people sending data into rooms, including the ability to moderate what's going on in there, in the sense that you can have power levels to ensure certain people can't send bad things into the room. But also beyond that, you can really power any application you like that needs a network underneath it. So it can be used for way more than just IM. But we're talking about Gitter today, and hopefully getting to that in a bit. So the most important thing about Matrix is that no single party owns your conversation. There's no one Matrix entity out there that says, right, I have total control of what you're saying. It's yours. If someone sends you a message, you have a copy of that. If you're running your own home server somewhere, every message and every piece of data that you are given, you have the option to store or the option to remove. You don't have to rely on somebody else holding your conversations and hoping they don't drop them. This is even more true today when we talk about the peer-to-peer stuff, but it's absolutely brilliant to have a communication protocol where you know where your data is. You haven't got to wonder how much of it is on my PC and how much is on the server. It's very clear cut and you have control. It's shared across all participants and all servers have control. That's another thing to talk about: every single server in the federation that is part of a single channel, or a room as we call them, has a piece of that conversation. And if one server goes away, it doesn't mean the conversation just disappears.
Unlike other protocols, each server has a copy of that chat, which obviously means you can have nodes drop in and drop out and things continue to work, which is a brilliant system. But today we're going to talk about bridges. Bridges are essentially our way of importing data into our little architecture here. So here we've got our clients and we've got our home servers, and we talked about those already, how clients hang off home servers as users. And then you have your home servers themselves, which ferry traffic across the network. But then there's this other problem, which is: how do I, as an IRC user, talk on Matrix? Because my IRC server is not a home server, and it's certainly not a client. What we do is have this piece of glue here called an application service. And what this is, is a piece of software which can translate messages from other protocols into a Matrix format. And the brilliant thing about this is you don't really need to know the fundamentals of the Matrix federation protocol to start writing these. All you really need is a knowledge of the basics of how Matrix messages work. Like a day's learning of the spec, really, is all you need to start sending messages around. And this is brilliant. So suddenly protocols out there which don't talk Matrix at all can start sending messages into rooms. It can be used as a brilliant way to onboard users. At Matrix HQ we've been running IRC bridges for freenode and other networks out there for years. They are still used today and people rely on these systems to have cross-communication between protocols. And the brilliant thing about this is it means you don't have to tell your users, your community, hey, we're using Matrix now, hey, we're using IRC, and no, you can't choose between those, you have to stick to one. In Matrix we give people the ability to choose what client they want to use. If they want to use IRC, then all the power to them, you can just bridge them in, and they can have total control and talk to people over Matrix as if it was native. A brilliant example here is we've got many networks today, even more so than when we wrote this slide a few years ago, which are completely siloed off in what we like to think of as bubbles. There are just so many networks that don't have cross-communication with each other. And what we do essentially is we go like this, bam: Matrix sits in the middle there and just allows each of these communication protocols to talk together. And one thing we didn't discuss is that just because you can have IRC and Matrix talk doesn't mean Telegram and IRC can't start talking either. So we have communities today where people on freenode talk to people on Telegram, people on Slack talk to people on XMPP, and then you've even got GitHub app services out there which allow notifications from GitHub to appear on five different other protocols. It's all possible. Again, if you saw my talk last year, you'll see how we, I think, bridged five different protocols together in one room. And it works. You get five different communication protocols all talking together, and it's all seamless. You don't see a message saying, hey, please make sure you subscribe to Slack before I start talking to them. It just lets you do it. And that is truly the power of Matrix: it's a standardized protocol that lets any different network underneath talk to each other, which I think is pretty cool.
But obviously this sounds very difficult, and it's actually super easy. Sending messages into rooms is just one API call away: it's a bit of JSON. You send this text here: the message type is text, the body is hello. Along with that, we have a room ID and an access token, as you'd expect, and an event type, which is a room message. Very simple, and you get an event ID back. And it really is that simple. This is all standard as part of Matrix. That's all it is. It's essentially two JSON keys to start talking in a room, which is pretty good. So to drill a bit more into application services: it's essentially the bridge API, informally. It's an API where the home server sends messages towards your piece of bridge software sitting somewhere, in real time. So it's an HTTP push transport. Basically, every time a message hits the home server, it checks the configured namespaces and ferries it along to your piece of software, which then forwards it to the remote network. And then going back the other way, what we basically do is give the application services the client-server API on steroids. It means you can masquerade as any user inside your namespace. So for example, for freenode users, the freenode bridge has access to pretend to be any single user under the freenode underscore namespace, essentially. And this means you can create users very quickly on the fly. You don't have to register them all beforehand, and you don't have to hold thousands of access tokens in your database. It's one registration file to control a load of users, and traffic is sent to you reliably over an HTTP push as opposed to syncing for each individual user. It's very fast and there are numerous libraries that do this for you. It's pretty cool. So to summarise Matrix, essentially, and bridges in general: you get the client-server and application service APIs, and you get many client implementations. Today we have numerous clients: several for mobile, for Android essentially, a few for iOS, and loads for Linux, and through the Qt libraries out there you get Mac and Windows support as well. It's got a much richer ecosystem than it had before. We've got many languages supported: Rust has really come a long way, as well as the Go support. The other thing I haven't talked about, because there's barely even time, is end-to-end encryption support. So if you want to send messages encrypted to your friends without your server snooping, you can do that. That's now a stable thing; in fact, at last year's FOSDEM we turned it on by default. So you get a lot of power there. And of course, probably the most important thing of all, you get a vibrant federation of networks: you've got Mozilla, KDE and GNOME all running on top of Matrix, either in a formal capacity or in a trialing stage. Last year, I believe, we got Mozilla on board. So it's a pretty powerful thing that we have so many networks on our system these days. So if you start connecting with the community, you get all these people for free. You haven't got to go up to them and say, hey, can I join your community? It's just there. You just go to the right alias and suddenly you're talking to these guys. It's pretty damn nice.
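To make that "bit of JSON" concrete, here is a hedged sketch of the same client-server API call, written in Lua with the LuaSocket/LuaSec libraries simply to stay in one language for the sketches in this document. The home server URL, room ID and access token are placeholders, and a real client would JSON-escape the body and pick a unique transaction ID per message.

```lua
-- Sketch: sending an m.room.message event over the Matrix client-server API.
-- Requires the LuaSocket and LuaSec rocks; URL, room ID and token are placeholders.
local https = require("ssl.https")
local ltn12 = require("ltn12")

local homeserver = "https://matrix.example.org"
local room_id    = "!someroom:example.org"
local token      = "ACCESS_TOKEN"
local txn_id     = os.time()                 -- should be unique per message

local body = '{"msgtype":"m.text","body":"hello"}'
local response = {}

local _, code = https.request{
  url     = string.format("%s/_matrix/client/r0/rooms/%s/send/m.room.message/%s",
                          homeserver, room_id, txn_id),
  method  = "PUT",
  headers = {
    ["Authorization"]  = "Bearer " .. token,
    ["Content-Type"]   = "application/json",
    ["Content-Length"] = tostring(#body),
  },
  source  = ltn12.source.string(body),
  sink    = ltn12.sink.table(response),
}

print(code, table.concat(response))  -- on success: 200 and a JSON body with the event ID
```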
So, on to Eric, and the story of another network that joins our protocol. Take it away. So in terms of Gitter, what does adding Matrix to Gitter do? It opens up the whole Gitter ecosystem to everyone on Matrix and everyone who is bridged into Matrix from other platforms. So anyone can access the Gitter conversation from their client of choice. We're building all this out in the open and we're more than happy to see you copy exactly what we did or reference it. Even our first planning meeting about the bridge was recorded and is available on This Week in Matrix, so you can go back to October and check that out to see what we were talking about back then. In terms of adding Matrix to Gitter, we're not having to re-implement the whole app in Matrix. We're just sending messages across from Gitter over to Matrix and making everything as seamless as possible. To give you context about what this results in, here is our demo GIF: you can see messages, image uploads and threading all flowing back and forth between Matrix and Gitter. On the left you can see Element, our flagship Matrix client, and on the right Gitter. If you're chatting from either side, we strive to make them both indistinguishable. You might be wondering how long it took us to get the bridge out there and available for everyone to use. After five days we were implementing the virtual user side of things, so everyone on Gitter can appear native when they're chatting from Matrix. You can see we were even playing with some IRC flair next to the name at that time. By day 11 we were bridging messages back and forth between Gitter and Matrix. And then just a few days later we shipped the bridge to production and allowed people to play around in a testing room. Exciting day that was. And finally, just two months after kickoff, we enabled the bridge for everyone to use in all public rooms. So this includes all the feature-complete niceties like image uploads, threading, mentions and so on; everything's going on there now. So, you know, it took virtually no time at all to get a fully integrated Matrix bridge: two months, one developer, and I guess a little bit of help from me, but you know, it's a pretty good feat, right, to have all that going. And it's been stable as far as I know. There have been no major issues, no trouble. It's just been running along, which is pretty good. But it's worth considering a little bit here, you know, what did we not choose to do? Because we chose, in a sense, to use a separate home server attached via an app service. Other things we could have done here: we could have embedded Dendrite. Dendrite is our next-generation home server; basically, it's written in Go, it's quite light, and it powers our browser-based peer-to-peer work, where the home server is embedded basically into JavaScript, so you can do peer-to-peer communication with your peers without any home servers running in the cloud at all. But we didn't go with that, because it's quite a lot of work to embed a home server into a JavaScript site or server-side app, and we wanted this to be really smooth and something that we can run at scale. We have never tried to run Dendrite embedded into something at the scale of a Gitter implementation before, so it felt best not to have a go at that one. The other thing is we could have implemented federation directly into Gitter, so actually have Gitter talk federation as if it were a home server, using the server-to-server API. In the end, we didn't go with that because it would have been quite difficult to build in a timely manner.
And it would be, you know, essentially re-implementing a home server when we already have a home server that's quite powerful. Synapse is quite happy to handle this kind of load, and it is surely happy to also handle gitter.im. So that's what we went with. And so you can see here, we basically stuck to our guns: a Node.js-based bridge embedded into the Gitter web app. Anyone who's familiar with the Matrix ecosystem will know that we have a lot of Node.js-based bridges: the IRC bridge, the old Gitter bridge, the Slack bridge, the Bifrost bridge, and many, many others out there. So there's no need to reinvent the wheel. We have the quite good matrix-appservice-bridge library, which is sort of the kitchen-sink bridge library these days. It contains loads of components to help you do things from room upgrades to encryption, but it also supports parsing incoming events and wrangling clients and creating intents and all that beautiful stuff. Finally, we went this time with a dedicated bridge home server. Previously we've tended to use matrix.org, the home server, as the bridge home server of choice, rather than spinning up a dedicated one. But for Gitter, we wanted to use that beautiful domain name. We wanted users to appear as, you know, eric colon gitter.im. We don't want all this extra fluff with the underscore stuff. So we went with a dedicated home server just for the bridge, running on top of EMS hardware, which gives us quite a lot of scalability, because we can just add workers as we need to increase resources. And it's worked out pretty well so far. And again, if you want to get started on this sort of thing, all you really need to do is npm install matrix-appservice-bridge. It comes fully included; you don't need lots of extra libraries to get going. There are examples in the repo which show you how to connect to a Slack webhook, but you can go much further and embed your protocol of choice quite easily. And just to give a brief overview of how this architecture works: Gitter sits on top, the bridge library is in the middle, and Node.js sits underneath. The way it works is that it basically handles all the complexity for you. Messages arrive in real time, you send messages as well, and you can update the room or user state with the bridge library. As I said, there are many modules for encryption, storage or debugging as well. And yeah, of course, there are TypeScript types, as every good library should have. And of course, we chose to hook into the existing app, as Eric will now tell us. So, to hook into the existing data stream of the Gitter web app: we already have real-time WebSockets and Faye subscriptions going down to the client to make chats appear in real time, and these are just backed by Mongoose database hooks. So whenever something changes in the database, we can send that down the socket. And we just tapped into those same hooks to be able to bridge across any new message to Matrix as we receive it on Gitter. On the virtual user side of things, we just stole the Slack bot emulation concept: you can just override the display name and avatar to represent anyone there. We're also a bit confident in this approach because we know there are a lot of successful, rich-feeling bots on Slack using the same concept. And the easy way to break this down is that a virtual user is just a way to override the display name and avatar for a given chat message.
In practice, this just means adding the virtual user field to the chat message schema. Then, whenever we see a chat message object, we look at whether it has a virtual user and use that instead of the normal author, the fromUser, whenever it's available. It's also flexible enough to work with other systems if you want to use this for more than just Matrix: the type field there allows us to namespace to any system, and whenever we see it, we can say, oh, add some Matrix flair to the side there to indicate the virtual user is from Matrix. The display name and avatar just show up like any other user on Gitter, so it makes it look much more seamless. Infrastructure-wise, the matrix-appservice-bridge library adds a listener on a certain port, and because we install it directly in the web app, this is running alongside all of the web app processes we already have. And even though there are multiple Matrix bridge processes running at once, the load balancer in front of everything will only serve out to one of those at a time, so there's no need to worry about making a locking mechanism or duplicating work there. Those matrix-appservice-bridge listeners are exposed via the matrix.gitter.im subdomain and communicate with the gitter.im Matrix home server on EMS. So we just have an nginx config that serves that subdomain over to the matrix-appservice-bridge port. And then we decided to host the gitter.im Matrix home server over on Element Matrix Services, EMS, which takes all the hassle out of setting up and managing your own server. To follow our progress, we have a big GitLab epic which lists a bunch of issues we are tracking for the bridge. We also regularly post an update in the This Week in Matrix blog post series that gets posted each week. And so, go ahead. Yep. So yeah, obviously this talk is very much focused on the JavaScript world, because we tend to write a lot of bridges in that, but I want to highlight that we actually have a lot of support in other languages. I want to give a shout-out to Tulir, who has written a lot of our bridges: the Telegram bridge, the WhatsApp bridge, and so on. And to that effect, the Matrix Python and Go libraries are really good libraries to use if you're not a big fan of JavaScript. So there is support for other languages out there as well. And yeah, it's a pretty rich ecosystem these days. And the other thing to highlight, while we're talking about bridges, is that it doesn't have to be just an IM bridge. We've got Cerulean out there, which is a micro-blogging client built on top of Dendrite. So essentially, yeah, Dendrite again, right? Basically, Dendrite is turning into our experimental home server of sorts, where we just ship MSCs, you know, spec proposals, onto it first to test them out. This year we did a lovely demo, or even an actual live thing, you can try it out if you go to that URL, where basically you can send messages in an almost micro-blog-like format, and people can reply to them in real time over Matrix, as a nice cool experiment. I want to highlight another one of our projects: presents is a thing that we wrote which basically allows you to turn Matrix rooms into presentations, where you send events in to describe a presentation, and then you send people a link to your room through the client and suddenly everyone can follow your presentation in real time.
Again, it's another cool use of Matrix, and I'm hoping at some point someone will build a Google Slides bridge for presents, to save me the hassle of using it by hand. And finally, to throw something quite fun in there, we've got a battleship widget thing, which you can basically run through Matrix again; it uses Matrix events to send messages between each other and, I guess, play games over it. It's pretty damn nice as well. So I want to talk about things that are coming soon, because there's a lot of stuff which Gitter needs. You might be saying, you know, it's all very good that you've bridged Gitter, but when Gitter eventually decides to retire its own app, will all the features we have in Gitter appear on Matrix? We're solving this in several ways. One of the problems we currently have is that all the Gitter-bridged rooms have no previous history: we don't have the ability to show history in the rooms from before the point they were bridged. That's going to change, as there is a feature coming up called history merging, which basically allows us to backfill history from before the chat room was even created, to include history from the remote network. And that's going to be really useful, not just for Gitter, but for Slack and many other things. We're also going to add the ability to do peeking, which allows you to look into a room before you join it. Obviously Gitter has this lovely feature where you can go to a room and immediately see the history without even creating an account. We've had guest support in Matrix, and some people would love to have the ability to say, I want to look at a room before I actually join it, which would be quite fantastic to have. We want to have threading support: again, Cerulean is that client I talked about earlier which shows the micro-blogging concept, but I'd like to bring that more to the forefront of Matrix so that threading can be done in a beautiful, dynamic way, and it'd be really nice to actually see that work in other clients. Account portability is another nice thing to have, where people on Gitter would like to be able to take over their bridged account. Because, remember, at the moment Gitter users are represented on Matrix as virtual users. We'd like Gitter users to be able to claim those and say, hey, I'm this person, I'd like to take over these messages and rooms because they're actually mine. So we're adding that to Matrix at some point. After that, good static archives, so that you can view rooms without having to log in, which would be a nice thing to have. We've got this with matrix-static at the moment, but it's not particularly pretty, and it was a GSoC project that hasn't really gone anywhere; it'd be nice to have that as a full-time thing. And then afterwards, social login, to log in with your GitHub or GitLab account. Canonical DMs would be nice: you might have realized that in Matrix it's possible, or at least used to be possible, to create multiple DMs between the same two people; we'd like to fix this properly by having canonical DMs, i.e. one DM per two people, so you don't end up with a thousand DMs, which would be nice. And then finally, more generally, we want to improve the bridging UX in Matrix.
You might have seen this year, in Element Labs, that we've shipped the ability to show bridge state, so that bridged rooms have a bit nicer UX, with links back to the bridged side of the room and information about the bridge. We'd like to make that more first-class in Element at some point. And so finally, I want to give a bit of a shout-out and say: come join us in the bridges room on matrix.org. It's a great community to talk about your bridge prospects, where you can come to us to talk about whether something seems feasible, or about your project, or, you know, if you've got a spare protocol lying around that you'd like to bridge in, it would be a fantastic time to come find out how to do it or to show us what you've done. We'd love to have you there. And so thank you everybody. Thank you for listening to this talk. Sadly we couldn't be there in person this year, but the stuff coming out of this could be pretty exciting. And I guess you can ask those questions in the Q&A session. And that's it for me. See you everyone. Bye.
Matrix is an open protocol for secure, decentralised communication - defining an end-to-end-encrypted real-time communication layer for the open Web. Historically the network has been made up of newly written native Matrix clients, or bridges to 3rd party existing chat systems (e.g. Slack, Discord, Telegram). This year, however, we added production-grade native Matrix support for the first time to a major 3rd party chat system: Gitter, over the course of about 5 weeks. This talk will explain how we did it, and show how easily other existing chat systems can extend their reach into the whole Matrix ecosystem; breaking open those walled gardens forever. With the European Commission proposing to mandate interoperability for big tech "gatekeepers", it's never been more relevant to understand how best to expose existing communication silos via open communication APIs such as Matrix. Between October & December 2020 we happened to go through precisely this process - designing how best to make Gitter natively speak Matrix such that Matrix users can natively participate in all Gitter conversations, using it as the reference example of linking an existing large-scale chat silo into Matrix. Do you try to natively speak the Matrix server-server API? Or do you embed a homeserver (if so, which one?) and use the Application Service API? Where do you store the data, and how do you minimise duplication between your existing service and Matrix? How do you scale to a userbase of millions of users? How do you handle end-to-end encryption? In this talk, we'll answer these questions; and show how we're at the point where Matrix has matured enough that it could actually be used to add open communication APIs to the tech giants if they ever found themselves in need.
10.5446/53368 (DOI)
Hello everybody, my name is Scott Godin. I've been working in the telecommunications industry and VoIP space for the last 25 years now. I originally got involved in the reSIProcate project back in 2004 and have since been using it in many different projects, ranging from your standard SIP-based proxies, back-to-back user agents and softphones, all the way to more customized SIP work in the two-way radio space, where they're using SIP to carry radio protocols as well. The flexibility and solid design of the reSIProcate stack has played a vital role in the success of my projects. I then based my consulting company, SIP Spectrum, around helping industry leaders to design and implement VoIP projects, most of which have been with the reSIProcate project. A key driver behind my love for the stack is really that it was created by the IETF authors of the SIP protocol, names such as Robert Sparks, Adam Roach, Rohan Mahy, and Cullen Jennings. So you can be assured that the SIP protocol itself follows the RFCs very closely. I also like that it's written in C++. Daniel Pocock, who will be delivering the bulk of today's presentation, and myself are currently the main project maintainers. And Daniel will tell you more about the stack. I hope you enjoy this presentation. Thanks. What can people tell me about reSIProcate? What comes to mind when I mention reSIProcate? Sorry? Good architecture? Anything else? SIP compliance? Okay. Monkeys. Yes, we have monkeys and lemurs and baboons and all sorts of dangerous animals. So watch out for the dangerous toys that are floating around the room. So a little bit about how we work. The project is primarily developed in C++, but we also support some Java and Python interfaces now. It's cross platform. So I do all my development on Linux, but one of our other major contributors, Scott Godin, works on Windows. So both Linux and Windows are very well supported. Because of the architecture and the very high quality of the code, it's also easy to support other platforms like Android and cross compiling. It's a BSD-style license; it's actually the Vovida license. So this affects the choices you have when you release binary products and whether or not you release source code and how you have to attribute that work. There are packages for many distributions like Debian, Ubuntu, and Fedora. Who uses packages for things? Yeah. And this can be convenient. I'm also a Debian developer, so I can upload backports of the package for the stable Debian release. So you don't have to wait two years for the next Debian release to get the current version of reSIProcate. You can get it from backports, usually two to three weeks after I make a release upstream. We work primarily with GitHub. We have continuous integration with Travis. Who's been using Travis or Jenkins or another continuous integration tool? Yeah. And they're very convenient for detecting problems in contributions, which is important in a community-based project. We have lots of unit tests. And these were developed before we had the continuous integration systems in place. So now that we have the benefits of free services like Travis, all our unit tests can be run over and over again. Yeah. When we wanted to cross compile for Android, we could run our unit tests. We have lots of them. And that's an important way for people to start contributing as well. Some reasons to choose reSIProcate, which may be for part of a project or it may be for your whole project: IPv6 works really well because it's always been there.
We didn't retrofit IPv6. So if you're going to encounter IPv6 at some point, if you haven't already, you'll be reasonably safe with reSIProcate. Then TLS. This is also something that's been implemented very well in reSIProcate. So if you want to run SIP over the real internet, where you have to deal with things like NAT, you can use TLS to help it get through those firewalls. For several things that I do, I use port 443, so it looks like HTTPS traffic. It also supports federation using domain validation in the certificates. And all of this is already pre-configured when you run the SIP proxy, so you don't have to manually reinvent the wheel each time you deploy it. The default is to federate, but you can change that and customize it if you need to. There's extensive coverage of things from the SIP spec and from other RFCs that have appeared since then enhancing the SIP protocol. You can use low-level APIs directly. So if you want to write code for parsing messages, you can use the parser directly. If you just want to make a user agent and you want to use a single class to access it or to create a user agent, you can do that as well. And that hides all the difficult stuff behind the scenes. You can also run the processes without compiling or modifying anything yourself. There are many different processes, which I'm going to cover in a moment. It's a very generalized architecture, which has been mentioned already. When we needed to add WebRTC, we only had to add a couple of classes, because the architecture for transport code is very generic. So we just added WebRTC classes and they fit within the existing paradigm. And it's the same thing for adding other things. If you want to add another authentication back end such as RADIUS, and I added RADIUS myself a few years ago, you just add a class and it drops into the existing code. You don't have to modify things in a whole lot of places, because every aspect of the stack has been designed to be customized. So let's look at an example of that low-level API. In this case, I'm going to talk about a handler script that takes SIP messages from Homer and pushes them through a message queue, and then a little C++ process is running, taking them off the queue, and in a try-catch block it just asks the message parser to parse the message. If the message is bad, an exception will be thrown and the code can report that or log an error or whatever. So this can be a useful way of examining things outside of your SIP architecture. So we can call that parser API directly. Just to show how easy it is to parse a message, this is taken from one of the unit tests. The first, I mean, 80% of what's there is just a string containing a SIP message. This one line here parses that message, parses the string into an object. It will throw an exception if something is wrong. The final line here takes the method from the object and displays it on standard output. So that demonstrates how to access headers or how to access the request line with C++. So, a high-level example: we have a sample user agent called testUA which demonstrates how to build a conferencing server. It's basically one class for a whole conferencing service. It has a command line interface, so you can interact with the participants. You can invite people to join or cut them out or change the mixer settings on the fly. And the same API can be used to build a softphone, a voicemail server, a B2BUA. You could even use it in a hard phone solution.
So the user agent class and the conversation manager class are the two classes you can use to build things with the high-level API. And here's an example. I've just taken one of the methods from the API and demonstrated how, when an incoming call arrives, your code is notified about it. So it's event-driven. And you can decide whether to answer that call immediately or whether to defer answering. It might depend on a user interface before answering, but in this case, we just answer that call immediately as it comes in, which could be a useful way to build a conferencing server. So there are a range of callbacks like this for the events that take place in the service. So some things that have been built with reSIProcate: there is repro, which is a SIP proxy. It's very different to Kamailio. Repro is a lot more concise. The number of things you can do without modifying the code are quite limited. But on the other hand, it just works out of the box for a lot of basic situations. So if you just want to run a simple SIP service for a few users to connect to, you can install the repro package and just start using it. You just have to put in your domain and your users and you can get up and running quickly, including WebRTC and TLS. We have a TURN server, which is called reTurn; reConServer, which is a basic B2BUA or SBC; the music-on-hold/park server, which can serve music on hold; and the registration agent. This can send a REGISTER to a provider of your choice, telling them to send calls to your SIP proxy. Most SIP proxies can't send a REGISTER by themselves; you need to run an Asterisk or something to send REGISTER messages. But using the registration agent, it's very easy to send REGISTER messages without running a full PBX. So this is an example of the repro web interface, adding a route. You can use regular expressions in the routes. You don't need to have any routes: if you just add users to the system and they dial each other, it will automatically route those calls between the users. You only need to add routes if you want to give them the possibility to make outbound calls. And there's more. These are some of the more obscure parts of the project that people are less familiar with, and maybe they're a little bit less tested or incomplete. We have the iChat gateway, which is a gateway to XMPP. So this can be linked with something like Prosody or ejabberd. And telepathy-resiprocate, which links with the Telepathy framework on the Linux desktop. And so that allows you to create softphones on Linux very quickly. And sipdial. Once again, a very simple standalone utility, like the registration agent; it knows how to send a REFER to different types of phone like Polycom and Cisco and Linksys, so that each type of phone will dial a number. And you can send the REFER through the proxy. It doesn't have to go through a PBX. And the various phones will react to those messages and start dialing. So this is an example of GNOME Empathy, which is a Linux softphone, and how it can use different backends. And you can see on the left, there's an architecture diagram. And the different backends are at the bottom row. So you've got things like XMPP or SIP, and reSIProcate fits here as a SIP back end. And the front ends are all developed by other developers. And by providing a back end, we don't have to work on the front end. We can let other people develop those and interact with reSIProcate without even testing against reSIProcate. They can test against another back end.
And then we can drop in reSIProcate and it should just work. So that's a very modular architecture. On the right here, you can see the buddy list and actually making a call to another user that's been found in the buddy list. And that whole buddy list is generated by the Empathy front end. That wasn't developed as part of reSIProcate. So what next? You can mix and match. You can take something that you saw today, like the XMPP gateway, and use that with Kamailio or with another SIP proxy. You don't have to use everything from reSIProcate. Because of the very high adherence to the standards, everything should interact with products from other vendors, including Kamailio. If you do want to get started quickly using the packages, see the RTC Quick Start Guide. That's a complete step-by-step guide. And if you're interested in using reSIProcate in a product, you're very welcome to contribute unit tests to help test the features that you depend on, so that when we make changes to the code, if your unit test reports a failure, we know we have to fix that before the next release. You can also look at the Telepathy code to see a C++ integration. Right, so our Q&A right now. Do we have any questions in the chat? Not at the minute. We don't seem to. So, let's go. It's good to see you again. I am going to do this, unfortunately. So, I do have one question, though. reSIProcate has been around for, I think, as long as I can remember working on, well, on VoIP, actually. So, what is next? There's not a lot of movement going on in the SIP world, is there? Well, I just looked up the new RFC today, the one for push. So, what's happening in reSIProcate? Okay, so at the moment, on the low level, sort of on the development side, we're exploring things like moving to CMake to better improve the integration between the Windows builds and the Linux builds. At the moment, we have to maintain two different build systems, one for Visual Studio and the other being autotools. So, we're looking at consolidating that with CMake. That won't make a big impression on the users, but for developers, it just takes a little bit of pain out of the development process: when we add a source file, we currently have to add it twice. So, anything we do to streamline the life of the developers leads to people developing more, hopefully. Yeah, I mean, 100%. I think anything you do to improve developer experience is just plain better. Yes, we have a lot of these things that are sort of partially complete. Like in the video we just saw, the telepathy-resiprocate integration is not complete. It's quite interesting because it's not complete. There's not a lot of code, it's just the bare bones that let you see how to use the Telepathy bindings with a C++ application like reSIProcate. So, it's an interesting place for someone to come in and see how those things work together. It's great for a developer who wants to learn about Telepathy. On the other hand, it's not a complete implementation yet, so it's not great for users. But if you're a developer and you want to see how you would start integrating another product into Telepathy, this is a good piece of code to look at, because we've only started by doing the most essential methods. Right. And in terms of looking at it from a completely different angle, servers that are running reSIProcate, is there any news in there, any new piece of software that has come up?
The main thing that people seem to be using is either the repro proxy, which is a proxy which tends to be fairly stable; people don't add a lot of extra functionality in the proxy. They just use it as it is. And then they implement their functionality in something else, you know, further back in their network. So the fact that the proxy is simple and it just works is what people like about it. They don't want it to change too much. They just want it to keep doing the same thing. On the other hand, there are people building various session border controllers with reSIProcate. Once again, they build their media applications behind that. So the nature of reSIProcate is that they don't have to put their media code into the reSIProcate stack. They can do all the media stuff behind it. Because it's very modular, things can be separated the way they want them. And people who use reSIProcate are really happy with that model. So they don't need a lot more from it, because they do that in their own applications. And they can just join them together in the way that's most convenient. Another big thing that's going on is that at the moment, I mean, for more than 10 years, we've been running MediaWiki, Bugzilla, mailing lists with Mailman. So a combination of these traditional applications that all need to run on a Linux server; we're looking at ways to move beyond that. For example, whether we can put our wiki pages into the Git repository. That would be interesting because when people change the code right now, they have to go somewhere else to maintain a wiki. But if we can have that documentation update in the same commit where you change the code, developers are more likely to keep the documentation up to date. Yeah, so I think that because that's been the way it has been for more than 10 years, it's like it's fallen behind a lot of other projects. So we're going to take this big jump, and we have a bit of time and space to look at it and decide on a strategy to change all those things in one go, perhaps. Right. I think, maybe I don't know, because you mentioned that it has somewhat stagnated or something. My impression from when I was working on SIP was not that precisely, but more that reSIProcate was used in software that was proprietary, maybe it used to be and not any longer, I don't know. And so it didn't get the buzz that it deserved, let's say, because you wouldn't see it. Is that still the case, or is that changing? This is an interesting question. It is deliberately a permissive license. It's a BSD-style license, the Vovida license. It's not a GPL project. So people don't have to share their code. People who use reSIProcate, they like that. So it's not a GPL project, so we don't expect people to give everything back. What we do try to do is to encourage people to help us to fix the low-level stack, because that improves the quality for everyone and we all need to work together. So if you're running something that's built on reSIProcate and I'm running something else that's built on reSIProcate, we might not want to share our media code with each other, but we want our applications to talk to each other.
Discussion of the most recent release of reSIProcate, how to use it and how to get involved.
10.5446/53369 (DOI)
Hello everybody, thank you for watching this video. My name is Oleg Agafonov and I'm CTO and co-founder at SIP3. Today we will learn how to build SIP3-based solutions and also work through a real Wangiri fraud detection use case. But before we start with the topic, let me say a few words about SIP3. SIP3 is a very advanced voice over IP monitoring platform. Our mission is to provide the ultimate visibility of voice over IP networks. Even though the very first version of SIP3 was released just three years ago, it has already become an irreplaceable tool for many support, maintenance and development teams in telecom companies of different sizes and business orientations. Moreover, there are already some cases where SIP3 is used as a collaboration tool, some sort of a glue between technical and management departments. Let's see what makes SIP3 so attractive. The very first thing every successful telecom company needs is monitoring. And here with SIP3 you will have a full house of SIP and RTP metrics which you can use to build all required quality of service dashboards. All the SIP3 metrics are multi-dimensional. That means you can configure the monitoring system to reflect parameters which are really important for your business. And last but not least, out of the box SIP3 ships metrics to dozens of time-series databases and monitoring platforms. For instance, if you think that Datadog is a great monitoring platform, and it actually is great, you can stay with it and take advantage of both Datadog features and SIP3 metrics. The second thing every successful telecom company needs is an ability to find all the required information about any call or transaction within a few seconds. That's why in SIP3 we created a widget called Advanced Search. Advanced Search was inspired by the Wireshark query language; that's why it's very intuitive for telecom engineers. With Advanced Search you can build pretty complex queries by using four main groupings of search attributes, combining them with five main search operators. Every call you find with SIP3 has enough information for efficient troubleshooting. Due to advanced call correlation logic, you can see all the signaling and media call legs on one and the same sequence diagram. Moreover, you can dig deeper through this diagram if you want, for instance, if you want to get a granular media quality of service report. And of course, all the related information can be exported as a pcap file for further analysis and in case you want to share it with interconnection partners. All the features I just mentioned are something that we call the SIP3 core. However, there is a hidden part of the iceberg, which is called SIP3 user defined functions. SIP3 user defined functions were inspired by Kamailio and its KEMI framework. A user defined function is a small script written in Groovy or JavaScript, which aims to extend the basic functionality. Today I would like to share with you a few recipes. Some of them will be a bit artificial and some of them are just ready production solutions. The story of SIP3 user defined functions begins with the sip_message UDF. I think you already got it that SIP3 will call this function for every SIP message. So let's see how we can use it. SIP3 aggregates call legs by matching A and B numbers. However, in real voice over IP networks, we have tons of situations similar to the diagram on the right, where, as you can see, an SBC reformats an international United States number into a national United States number.
Also, as you can see, our SBC works as a back-to-back user agent and has different call IDs for each one of the call legs. So how do we help SIP3 to correlate this call? The easiest option is to add an additional X-Call-ID header to the second call leg. But as you know, that's not always possible due to various reasons. So let's try to do it with user defined functions. Take a look at the code snippet. As you can see, we read the From header, match it against a certain regexp and set a special caller attribute, which we call a service attribute, on the packet. So far so good. Now we have our calls correlated. Let's see what else we can do. This example is a little bit artificial, but it will help us to understand what will happen next in the presentation. Let's assume that our company is suffering from SIP hackers who use only the number 100. I bet you've met this number too. The question is, can we somehow see all the call attempts from this number on a separate dashboard? And also, can we have an easy way to find such calls? Sure, you can do it by using user defined functions. As you can see in the code snippet, it's very easy. We just check if the From header matches a certain regexp, the same as in the previous code snippet. But this time we add a robocall attribute. We call such attributes user defined attributes. Once you define and assign it, SIP3 will automatically provision it to all related metrics and also add the attribute as an advanced search option under the SIP grouping. Now let's take a look at more advanced and less artificial examples. In the code snippet, you can see how to parse an Identity header if you work with STIR and SHAKEN and would love to have more statistics related to this protocol. Integration with APIBan from Fred Posner is another great example, which you can find as a tutorial on our documentation site. So what's happening here? We read the entire APIBan database in async mode and put it in a set of blocked addresses. Now if we receive a SIP message and the source address is found in the list, we will assign two user defined attributes: blocked as a Boolean and blocked address as a String. Here you can see search results and the chart of all the call attempts with the blocked attribute equal to true. Unfortunately, I think you can't see the small font, but our instance is perfectly secured with APIBan, and the states of all call attempts are either unknown, basically blocked by iptables, or unauthorized, blocked by the signaling node itself. I hope at this point you have already found user defined functions to be a very useful feature, but can we do more? The answer is yes. The sip_call user defined function is a function which will be called for each call. And whereas in the sip_message user defined function we had the SIP message content as a payload, here we will have a real-time call detail record with call state, call setup time, duration, and all the call-related parameters important for analysis. Here is a very small example of how we can mark all the calls with setup time bigger than our SLA threshold. It's pretty similar to what we've done in the sip_message UDF. And I'll repeat it again, it's very clean and simple. So now that we know what SIP3 user defined functions are, we can finally get to the second part of our topic, and here is the real customer story. There was a voice over IP provider, which I'll keep anonymized. In terms of traffic, we had around 1000 call attempts per second, which is around 20 million call attempts per day.
A few months ago the amount of call attempts started growing. However, the amount of active clients stayed the same. With the growing number of call attempts, this provider had to choose either to buy licenses from its voice over IP hardware and software vendors or to introduce throttling policies for some of its clients. Also, this provider started noticing more and more Wangiri fraud activities while troubleshooting support tickets, and that's the moment they came to us and asked if we can find all the Wangiri clients and help to estimate their presence within the network. Just a few words about Wangiri fraud. Basically, it's auto-dialers who call you and hang up immediately. Their idea is to make you call them back. I'll skip the motivation topic because it might be very different. So how did we decide to find them? The idea is simple. We use the sip_call user defined function to send real-time call detail records to a microservice called profiler. The profiler will gather all the features, as they call them in machine learning, and put it all together in CSV files for analysis. Then we will use TensorFlow to train a model and connect this model with the voice over IP provider's API. Unfortunately, the time I have is not enough to share all the details. That's why I will show very briefly what was done in the first steps. As you can see, it's just a few lines of code and you have real-time call detail records sent via a UDP socket. Perfect. Actually, just a few words: in our user defined functions, we use a framework called Vert.x. And out of the box, this framework lets you send data using webhooks and integrate with databases and message brokers in just a few lines of code. Now let's go to the profiler. Now that we have data coming to the profiler, let's see what features we need. First of all, our customer is suffering from a growing number of calls. That's why we will put total calls in as a feature. Also we want to know the difference between total duration and charged minutes, because, and this is a very important moment, we know that all outgoing calls shorter than three seconds are free. Of course, we want to know the amount of failed, cancelled and unanswered calls. And the last two, which we consider the main Wangiri behavior features: the amount of calls terminated by the caller, plus the amount of calls shorter than three seconds, so basically free calls. Now let's put it all in the CSV file and try to assess it. Here is the statistic for outgoing calls sorted by total calls. Look at the first row. It is certainly not a Wangiri client. Most of his calls are failed and the amount of cancelled is in the same range as the amount of unanswered. Plus, he is certainly willing to spend more than three seconds on a call. And you can see the top 20 of our list have the same profile. So it's clear that our voice over IP provider has problems not because of Wangiri, but because of something else. I'll skip the troubleshooting part and will just say that the growing amount of call attempts was caused by misconfiguration on both sides. Yeah, it happens very often. Everybody was notified and the amount of call attempts got back to normal. Now you must probably be asking me: Oleg, what the heck? You promised to show us how to detect Wangiri fraud attempts. Yes, I did and I will. Look at the row in red. 34k cancelled calls with almost 5k answered, and almost all of them were terminated by the caller and were less than three seconds. And here is how our statistics look if we sort by the three-second call rate.
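As an aside for readers following along: the talk doesn't show the profiler's source, so below is a minimal Python sketch of the idea just described. It listens for the real-time CDRs that the sip_call UDF pushes over a UDP socket, aggregates per-caller features, and writes them to a CSV for later analysis; sorting that CSV by the short-call rate gives exactly the view discussed next. The CDR field names (caller, state, duration, terminated_by) and the one-JSON-object-per-datagram framing are assumptions for illustration, not the actual SIP3 payload format, and the real profiler may well be implemented differently.

```python
# Hypothetical Wangiri feature profiler: listens for JSON CDRs pushed over UDP
# by a sip_call UDF and aggregates per-caller features into a CSV file.
# Field names and framing are assumptions, not the actual SIP3 wire format.
import csv
import json
import socket
from collections import defaultdict

FEATURES = ["total_calls", "answered", "failed", "cancelled",
            "terminated_by_caller", "short_calls", "total_duration"]

def run(host="0.0.0.0", port=15090, csv_path="features.csv", max_cdrs=100_000):
    stats = defaultdict(lambda: dict.fromkeys(FEATURES, 0))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))

    for _ in range(max_cdrs):
        data, _addr = sock.recvfrom(65535)
        cdr = json.loads(data)                       # one CDR per datagram (assumed)
        row = stats[cdr["caller"]]
        row["total_calls"] += 1
        row["total_duration"] += cdr.get("duration", 0)
        state = cdr.get("state")
        if state in ("answered", "failed", "cancelled"):
            row[state] += 1
        if cdr.get("terminated_by") == "caller":
            row["terminated_by_caller"] += 1
        if state == "answered" and cdr.get("duration", 0) < 3:
            row["short_calls"] += 1                  # the "free" sub-3-second calls

    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["caller"] + FEATURES + ["short_call_rate"])
        for caller, row in stats.items():
            rate = row["short_calls"] / row["total_calls"]
            writer.writerow([caller] + [row[k] for k in FEATURES] + [round(rate, 3)])

if __name__ == "__main__":
    run()
```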
Here are our Wangiri clients, as I just promised, one by one. Now I hope you enjoyed this use case and would like to try SIP3 and its user defined functions in your own voice over IP networks. Thank you for your attention. Please visit our website and GitHub page. Find me on Twitter as agafox and also join our community channels in Slack and Telegram. Typical thing. All right. We're live in no time. So let me switch to the talk. Well, hello, hello. Hey, thank you for watching us. Yeah, thanks a lot for a great presentation, as usual, on SIP3. So well, I'll start of course by asking what is next on your roadmap? What is your vision, if you will, to take this tool to the next step? I see that one of the things you've done is integrate with the APIBan project from my good friend Fred Posner, shout out to him as well, which has been around for some time. So it's always nice to see synergies in projects that are both in the same space and, you know, made by like-minded individuals. So always good to see that. And yeah, so what are your plans in the short term, or maybe if you already have a plan for the longer term, it's always good to share with us? Of course, modulo the universe, like, things change. So we end up. Okay, I think that from a solutions perspective, we can just look for more useful integrations, and two presentations ago, I saw that OpenSIPS implemented a new API, which is called the Call API, and maybe we will integrate with it as well. Looks pretty good as an example. And also we are going to work on media features. We already reworked our media aggregation engine and now we can show and search for all the calls with one-way media or without media, so basically all these things which bother people a lot. And a new feature which will be released in one or two months is media recording on demand. For instance, if you have a call and SIP3 in real time sees that your quality is bad, it will record all your RTP traffic and will provide you full information for the rest of your call. So basically you will know what's happening there. And to be GDPR compliant, we will optionally record not the entire RTP but only, let's say, RTP with dummy data inside. So basically you will be able to analyze this thing in Wireshark and at the same time you will be RTP compliant. Sorry, GDPR compliant. Yeah, so things like that. I was thinking of more of a less GDPR compliant case, I was wondering. So how about sentiment analysis of a call, let's say. What about that? What do you think? We have ideas about that. Let's say that we don't have enough hands at the moment to dig into this direction. Do you think there is value in that, like mixing, for example, there was packet loss in this call and at the same time we detected that this participant was angry based on our audio machine learning something. I was thinking that we usually say one and the same phrases when we have bad quality, for instance, "I lost you" or "couldn't hear you" or stuff like that. So basically I was thinking that we can train a machine learning model just to analyze if we have these phrases within the call. So that was one of the options. Of course, the problem with this is, you know, they look cool, I guess, when you see them in motion. But when you have a second thought, you're like, wait a second, how is that implemented? Well, you need to record everything, right? So it's a bit more scary. So I really like what you mentioned about doing the recording with dummy data.
So you have like your RTCP packets that contain no media, but still contain feedback. So it allows us to, well, allows you to know the quality without actually having bits of the call. I think that's really, really clever. Are you aware of any other solutions that use this technique, for example? I think that at the moment, nobody has implemented such a thing. So I would love to say that we are the first who will do it. Yeah, I mean, GDPR has lots of ramifications, right? And implementing certain services to be GDPR compliant is hard. So these interesting solutions to the problem, yeah, they make you scratch your head and think about those. And I was looking at your presentation, I know there are some code snippets, I asked you in the backstage, like, what are those? You mentioned Groovy, so maybe tell us a bit about the software architecture, how is, yeah, SIP3 put together, let's say. Well, SIP3 is a microservices solution and all the backend components are written in Kotlin. So basically they are part of one and the same ecosystem. And we use the Vert.x framework, it's a well-known framework from the Java world, just to put it all together. If you are familiar with Node.js, Vert.x is basically, it's Node.js on steroids in terms of architecture. And we use Vert.x to provide this user defined function feature. So basically all the languages you can use to write code in Vert.x can be used to implement user defined functions. And basically it's all the JVM-supported languages: JavaScript, Ruby, the JRuby version, Kotlin, basically all of them, you can read it in the documentation. Mostly in SIP3 we use Groovy and Kotlin, just because it's easier for us. I see. So this would allow someone to sort of like, what kind of person, what kind of, well, user slash developer depending on how you look at it, what can they do in these user defined functions? Well, as I just showed in the presentation, they can add, let's say, extra value to their monitoring and troubleshooting, just because you can assign an additional attribute and mark a call which had bad quality or which had some problems or maybe which was, okay, here's a good example, actually. I mean, those sound like something you would want out of the box, right? Label me the bad calls. Sounds like something you would always want to have. But something more creative? You don't know what is specific, right? For instance, I can be a company doing voice over IP, but at the same time, I'm doing voice over IP with my messaging application, I mean, just an application using a call API, doing, like, SIP calls. And for me, it's important to monitor separately each version, each platform, like Android and iOS separately and this version separately and that version separately. I hear you. By default, you don't have it in any monitoring system. So you can just apply a simple script and emphasize all the important parts you want to monitor. Okay. I see now. It's going to be like applying kinds of labels, if you will. Yes. All these different flows. You apply additional tags and labels and you use these labels. They are provisioned automatically and you use them to build your charts in monitoring. And at the same time, you can use them for queries in troubleshooting. Very cool. Since we have had them also in the room many times, do you think that SIP3 and Homer have any point of intersection, or are they completely parallel tools that won't mix? We tried to find some, let's say, things to collaborate on.
We couldn't so far, but I'm looking forward, and I think that it's good that we have them, because it's a great motivation for our team to implement our stuff and our vision and at the same time to see what they are doing. Especially, as we said, I think three years ago, back in 2018, it's great that now we have SIP3 and we have Homer, just like having Kamailio and OpenSIPS, and having FreeSWITCH and Asterisk. Right. So there's a benchmark. There's some healthy competition there. Yeah. Makes sense. Can't argue with that. Well, thanks a lot for joining us, Oleg. Much appreciated. Thanks for some good content here. Thank you so much. And we hope to have you next year then. Absolutely. Thank you. Thank you. All right. Cheers, mate. Thank you.
SIP3 is an advanced monitoring and troubleshooting platform. It recently released a few very powerful APIs which you can use to build your own telecom solutions. In the presentation I will show how we used these APIs to implement a Wangiri fraud detection service.
10.5446/53382 (DOI)
Welcome everybody. Thank you for joining my talk. My name is Christian Kreibich. I'm an engineer at Corelight, where I'm part of the open source team and work full-time on the Zeek network monitor. Let's get started and let me share my screen. So today I'm going to be talking about Community ID, which is a standardized approach for flow hashing across your network security monitoring tool chain. Let me spend a couple of minutes motivating the problem. So if you've ever used Suricata, you might be familiar with the kinds of logs it produces. And here on the left hand side, we have a snippet of its EVE log that captures an alert that it flagged. And on the right hand side, we have a bit of HTTP logging that it is also capable of producing. And if you want to later on correlate some of the activity in those logs, it's really easy, because Suricata flags every flow that it encounters with a unique flow identifier. In Suricata's case, it looks numeric, and you can see in those two snippets that the identifier is the same. So you know it's referring to the same kind of flow. So that makes correlation really easy, and that is cool. If you move on to Zeek, which you might know from its logs, on the left hand side here we have a connection log and on the right hand side we have an HTTP log, then the same holds. Zeek also puts a unique identifier for every connection, for every flow it encounters in the traffic, into its logs, making correlation really simple. So here the actual identifier looks a little different, because Zeek computes it differently. But the usability is the same. You can easily correlate across those flows. And that is also cool. Now, if you start working with both of those systems, and if you're a practitioner you might know that that is a very common setting, then you'll notice that you have a problem, because those identifiers are computed differently and so they no longer match. And so as a workaround, you would basically have to identify, in each of those logs, how exactly a flow tuple is expressed, including all of the various little idiosyncrasies, and then basically abstract from that and come up with your own way of implementing the comparison, and make sure that you don't care about directionality and so forth, and identify things the same way. And that is really not so great to work with. So I think you can see where I'm going with this. We have good news for you. There is a better way to do this, and it is called Community ID. So Community ID is a standardized algorithm, really simple, for hashing flow tuples in multiple different network security monitoring applications. Let me spend a couple of minutes to explain how that works. It is intentionally really simple, almost embarrassingly simple. The result of that algorithm is just a string. It consists initially of a version string. Right now there is only a version one, so that part is only ever the number one, followed by a colon, followed by the actual computed hash. And that takes two parts as input. The first is a seed. The seed value is there so that in case you're facing multiple networks and you want to ensure that Community ID values that you compute in both of those settings can never collide, then you can use different seed values in those two settings and rule out that possibility. Then the next part of the input is just a five-tuple that is constructed from what you would expect, namely the actual source and destination IP, port, and transport protocol.
And if you look at the spec on our website, it spells out in quite a bit of detail how exactly you need to serialize, render that five-tuple into the hashing algorithm. The hashing algorithm itself is SHA-1, which is really sort of the most basic thing available. We did that intentionally. It was the simplest thing to use that is in lots of crypto libraries, standard libraries, and so forth. So really one of the most accessible options. And in order to compress down the rendered hash result, we just run it through Base64 to make sure that the string itself becomes a little bit more manageable, because it's just a little bit shorter. So that is just sort of a visual compression, if you will. And that's really it. So this is incredibly straightforward. And let me show you how that looks in practice. So we're going back to the same log snippets that I showed earlier. You have the Zeek log on the left and the Suricata log on the right, with those flow tuples that the systems natively produce, but that don't quite match. And now if you enable Community ID support in both of those systems, then you get an extra entry in those logs, the community ID, and you see immediately that those strings match. And so the correlation task is trivial. It's just a string search and comparison. And you know you're now looking at entries for the same flow. And that is super awesome. All right. So let me expand on a couple of the standard use cases. Some of them I've already sort of touched on, but the most common one is probably really that you have multiple different applications installed in your network that are all producing logs where you would like to simplify flow correlation. And you can do that by basically enabling Community ID so that those strings get produced automatically. You can also use it for SIEM correlation. So if your SIEM ingests all those logs, then the correlation task is really much simpler, because now in your SIEM you just need to do string comparison, which is easily supported. Another application that is sort of perhaps a little obvious, but that is also worth flagging, is if you have multiple installations of the same system in different places; then you can intentionally configure them so that Community ID string computations always match. You could potentially use the native algorithms that the systems provide for this as well. But if you get used to using Community ID, then in a way you're more sort of future proof, because if you then embrace its use in other applications, then you don't need to change your implementation. You can just continue to use the same string comparisons. There are other sort of more funky applications. The next one here was flagged to us by folks at ESnet. Thanks. That was actually pretty cool. They face a lot of asymmetric routing. So they see one direction of a flow go in one place and the other direction in another place in their network. But since the Community ID algorithm is not sensitive to the directionality of the traffic, the resulting hash values are the same. And so you can still use Community ID in those cases to identify the fact that the same flow was present in those two locations. That's kind of funky and not something that we anticipated originally. And then of course, you are not required to apply Community ID at the point of log generation. You don't have to use that in Suricata and Zeek and so forth. You can create those strings in post-production, if you will.
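To make that recipe concrete, here is a small Python sketch of the version 1 hash for TCP/UDP flows, using only the standard library. It reflects my reading of the published spec (a 16-bit big-endian seed, the endpoints ordered so that both directions of a flow hash the same, a zero padding byte after the protocol number, then SHA-1 and Base64); the reference implementations on the project's GitHub page remain authoritative, in particular for ICMP and other corner cases.

```python
# Minimal sketch of Community ID v1 for TCP/UDP flows, standard library only.
# Based on my reading of the spec; the official reference implementations are
# authoritative for corner cases (ICMP type/code mapping, etc.).
import base64
import hashlib
import socket
import struct

def community_id_v1(seed, saddr, daddr, proto, sport, dport):
    """Return the '1:<base64(sha1)>' Community ID string for a flow tuple."""
    def pack_ip(ip):
        family = socket.AF_INET6 if ":" in ip else socket.AF_INET
        return socket.inet_pton(family, ip)

    src = (pack_ip(saddr), sport)
    dst = (pack_ip(daddr), dport)
    if src > dst:                    # direction-insensitive: smaller endpoint first
        src, dst = dst, src

    data = struct.pack("!H", seed)               # 2-byte seed, network byte order
    data += src[0] + dst[0]                      # ordered source, then destination IP
    data += struct.pack("!BB", proto, 0)         # protocol number plus a padding byte
    data += struct.pack("!HH", src[1], dst[1])   # ordered source, then destination port

    digest = hashlib.sha1(data).digest()
    return "1:" + base64.b64encode(digest).decode("ascii")

# Both directions of the same flow produce the same string:
a = community_id_v1(0, "192.168.1.10", "10.0.0.1", 6, 51234, 443)
b = community_id_v1(0, "10.0.0.1", "192.168.1.10", 6, 443, 51234)
assert a == b
print(a)
```

Note that nothing here needs live traffic, which is exactly why, as described next, the same computation can also be applied to logs after the fact.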
You could ingest the logs and then create those strings for future comparison purposes, which is just as feasible. There's nothing that requires that algorithm to run as traffic is observed. So these are just a couple of examples that I hope you find useful. The current status: the spec, reference implementations, reference data if you want to come up with your own implementations and verify that the algorithms are correct, and a snapshot of the current status of support across different systems and languages is always on our GitHub page. This is just github.com/corelight/community-id-spec. And let me highlight a couple of things. So in terms of language support right now, if you have a system that is written in C, in Golang, Java, Python, Ruby, then adding support is really easy, because there are reusable libraries or packages available for those languages that you can just drop in and start using. And in particular for Python, for example, there's also a command line client that you can plug a flow tuple into and obtain the Community ID string. So you're not forced to see that in traffic first; you can also just synthesize those strings and verify them that way. So it's a pretty good basis at this point for a lot of systems. I'll flag a couple of the behemoths in the ecosystem. Suricata has supported Community ID since 4.1; it was one of the first to support it. Wireshark supports it as well, though that is a little bit more recent. This is as of version 3.4, or if you use development builds, I think it was in 3.2, if I remember correctly. And for Zeek, there's a package for its package manager, zkg, that is compatible with any Zeek version as of 2.5 and newer, and that is really basically covering anything that you should realistically be using at the moment. That is not all. If you go to that GitHub page, you'll see what other sorts of systems out there have added support. This is a snapshot that I think is complete as of today. Thank you to anybody who's been involved in adding support to a system. It's really heartening. It is really exciting to see. And you can see that, you know, if you're at all invested in any of these systems, you can really sort of just flip the switch and start using Community ID, and it will pretty much be supported across a lot of your infrastructure in open source. Now, you might say, no, no, no, this is so simple, this is almost stupid, you forgot so many things. What about, right, let me address a couple of the concerns you might have. So first of all, yes, exactly. This thing is not feature complete. That v1 was very intentionally sort of targeting just ease of implementation and getting a bit of feedback. There's a whole bunch of stuff that we might add for a version two. And one that I think would be really nice to have is basically an ability to make that algorithm configurable in terms of its inputs and outputs. So for example, in this example shown here, you might want to say, well, in this particular installation, we have included VLANs, QinQ, or other sorts of link-layer headers.
And one idea for this would be that in addition to that version number at the very beginning of the string, we also include really short, sort of basically config strings that just, via a couple of letters, capture how the algorithm was configured, so that you can see that in the string itself and ensure that you're comparing apples to apples, basically. Another thing that often is mentioned is performance versus collisions. So clearly, there is a potential here for things to be too slow; say, you might say that, okay, SHA-1 is clearly not one of the fastest hashing algorithms out there, you might want to use something like Murmur2 or what have you. And that is true. That might well be true, but it was not clear to us from the outset that performance is such a critical concern that we should adopt something sort of less commonly known just to squeeze out extra performance. Collisions, likewise: it's clear that for two flows that we're seeing sort of far apart in time, or that were actually present in different parts of the network and were semantically actually different flows, you would get collisions. Again, it's not quite clear that it is a big enough problem to warrant implementing something more complex. We've so far not really heard feedback that this has been prohibitive for anybody. If that is you, if you do find this is barely usable because of such concerns, then let us know and we can reflect that in future implementations. And I just mentioned time, and that is sort of a fun one, because intuitively, many people say, like, well, but the time is not in that hash. So if I encounter the same flow tuple, you know, an hour apart, then the hash will be the same. And indeed, it would be right now. And we were kind of treading carefully here, because if you do relatively straightforward approaches, like, you know, you round the timestamp to some interval, like an hour, a minute, whatever it might be, then you also introduce a risk that different monitors compute that hash value a little bit differently, because, you know, the clock in one is just before the end of that window, the other one is just after that window, you get different results and so forth. So I think this is sort of an interesting area for a clever algorithm that incorporates time, but sort of in a robust fashion. And if you have ideas for how to do this, we've honestly not spent too much time thinking about this, then let's hear it. That would be a really welcome contribution. Other base encodings is sort of one that is dear to my heart, because I feel that the choice of Base64 was sort of, you know, again, sort of the obvious one, but actually not that great, in that the output is sometimes a little ugly; you know, Base64, and those strings, you know, have lots of sort of funky characters in them. And we've heard some reports that at least in some SIEMs, you have to sort of go out of your way because some of those characters are considered special and therefore make the use of that string sort of more difficult than it needs to be. And so the obvious contenders would be Base62 or even Base58, depending on how many sorts of characters you want to rule out. I personally think that Base62 would be sufficient and sort of addresses that problem. But basically, this is another tension, sort of length versus usability. And finally, there is sort of anonymity versus reversibility. It is a fact that right now the produced Community ID strings are not reversible.
You cannot go back to the original flow tuple just knowing the hash. And, you know, if you still sit on the logs that include the community ID, you can locate it and then look up what that flow was. But you cannot do that just based on the string itself. Now, for a lot of use cases, that is perfectly sufficient because you are only ever going to look for string comparison or because you, you know, want to ask your buddies, hey, okay, this flow was nasty, it delivered some payload. Have you seen this too? But it is completely true that in other settings, you might want this to be reversible. And that is not something that is currently addressed by community ID. Like you could maybe just come up with a standard for writing out a flow tuple in a way that is not sensitive to directionality or some such. But it's sort of out of scope right now. All right, so with that, I'm done. That's all I had. Thank you so much. Again, if you want to check out the spec, the status of current implementations or just send some feedback, create some bug reports, whatever it might be, then go to our GitHub page. We welcome all of your feedback. If you are tempted to add support to an existing system or support for a new language, let us know too. Like it's always good to tell people ahead of time so that you don't waste time and effort. But thank you so much. Thank you so much to everybody who has already contributed to it or added it to some system. I really look forward to your questions and I'll see you in a couple minutes. Thank you, guys.
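To round off the correlation use case from the talk, here is a short sketch of what "correlation is just a string comparison" can look like in practice: grouping Suricata EVE records and Zeek JSON log records by the Community ID string they share. The file paths and the community_id field name are assumptions that may vary with tool versions and logging configuration.

```python
# Sketch: join Suricata EVE output and a Zeek log (written in JSON format) on
# the Community ID string. Paths and field names are illustrative assumptions.
import json
from collections import defaultdict

def load_json_lines(path):
    """Yield one JSON record per non-empty line."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                yield json.loads(line)

def correlate(eve_path="eve.json", zeek_conn_path="conn.log"):
    by_id = defaultdict(lambda: {"suricata": [], "zeek": []})
    for rec in load_json_lines(eve_path):
        if rec.get("community_id"):
            by_id[rec["community_id"]]["suricata"].append(rec)
    for rec in load_json_lines(zeek_conn_path):
        if rec.get("community_id"):
            by_id[rec["community_id"]]["zeek"].append(rec)
    # Keep only flows both tools saw, e.g. to pull Zeek context for Suricata alerts.
    return {cid: recs for cid, recs in by_id.items()
            if recs["suricata"] and recs["zeek"]}

if __name__ == "__main__":
    for cid, recs in correlate().items():
        print(cid, len(recs["suricata"]), "Suricata record(s),",
              len(recs["zeek"]), "Zeek record(s)")
```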
Network security practitioners frequently need to correlate logs and alerts produced by the systems installed in their networks. For example, a Suricata alert might require the context of Zeek's connection logs for the alert to become actionable. Normally the best way to make such correlations is by manually identifying the flow tuple involved, in each of the monitor outputs involved, around the timestamps in question -- a tedious and error-prone task. To simplify this process we're standardizing a straightforward algorithm, dubbed "Community ID" (https://github.com/corelight/community-id-spec), that produces short textual hashes that reliably identify network flows directly at the source. Flow correlation then becomes a straightforward string comparison operation. Popular open-source network monitoring solutions now include support for this emerging standard, including Suricata, Wireshark, and Zeek, and there's a growing library of reusable implementations in various common programming languages.
10.5446/53385 (DOI)
Hi, I'm Marco Spaziani Brunella and I'm a Principal Hardware Engineer at Axbryd. In this video, I'm going to present our work hXDP: Efficient Software Packet Processing on FPGA NICs. At Axbryd, we work on redefining the interface between software and network interface cards by bringing our research breakthroughs inside our products. The background of this work is that network packet processing is ubiquitous. Just think about 5G deployments and data center networks. Network processing is typically done on commodity servers with general-purpose CPUs, which are facing a stagnation of their underlying scaling laws. So the tendency is to save CPU cycles for tasks that cannot really be done elsewhere, thus pushing network processing into dedicated network accelerators, which vary in size, power requirements and the underlying technology. Talking about technologies, at Axbryd we are strong believers in FPGAs. Why? Well, because FPGAs have been targeted as a candidate technology for such network accelerators thanks to their reprogrammability features and the opportunity to combine different building blocks together to create complex network functions. And FPGAs are also being deployed as commodity resources inside data center servers, as machine learning accelerators and in 5G radio access networks. However, the problem with FPGA-based NICs is that to implement even a simple network function, we need to iterate many times through a feedback loop of code, simulation and synthesis before getting the actual bitstream that implements our network function. This is a tedious process which requires a lot of hardware expertise and a lot of time. To overcome this limitation, the research community has divided into two branches: one developing high-level synthesis tools to implement expressive network functions faster, but which still require a certain degree of hardware knowledge, and the other branch, which focused on developing abstractions such as match-action tables to have a programmable executor on the FPGA. This allows for faster programming, but has a limited or somewhat exotic programming model. The common ground of both approaches, though, is that they assume the FPGA to be dedicated entirely to network processing tasks. Instead, our approach is to take the eBPF infrastructure, which is a packet filter implemented in the kernel, recreate the same execution environment inside the FPGA and offload the eBPF execution to the FPGA. So before diving deep inside our architecture, we need to understand how eBPF works. So what's eBPF? Well, eBPF is an in-kernel virtual machine that executes eBPF bytecode. Programs are written in restricted C and are compiled into eBPF bytecode. Then they are injected into the system by calling the bpf() syscall, they are passed through a static verifier, and optionally just-in-time compiled for the target architecture. And finally, they are attached to one of the many eBPF hooks inside the kernel. And in our development, we focused our attention on one particular eBPF hook, which is called XDP, or the eXpress Data Path. XDP is one of the many eBPF hooks and is placed at the earliest point in the stack. It's not a kernel bypass technique such as DPDK, as it doesn't dedicate CPU cores. Instead, the CPU load scales smoothly with the network traffic load, and it's completely transparent to the host machine. And we can now explore what an XDP program's lifecycle looks like. An XDP program is triggered every time a packet arrives.
And as a first thing, the eBPF environment creates the context data structure for the packet, containing all the relevant pointers to packet data and metadata. And after that, the program parses the packet header and then interacts with the host system by means of map accesses or helper function calls. And finally, the program rewrites the packet header and takes the forwarding decision, which can be as simple as drop the packet or more complex, such as redirect the packet to a particular CPU core. And putting it all together, you can see that XDP sits really nicely alongside eBPF. So our idea is to take the eBPF and XDP infrastructure and recreate the very same architecture on the FPGA. We do so by implementing all the relevant map types and helper functions as dedicated hardware blocks, and by executing optimized eBPF bytecode inside a custom-built CPU called Sephirot, thus creating our hardware eXpress Data Path. And the original eBPF bytecode passes through our compiler and gets optimized for the execution on Sephirot, which will come later. Diving more deeply inside the hXDP architecture, we can see that we have two distinct operational phases. At configuration time, we populate Sephirot's instruction memory with the optimized eBPF bytecode and we configure the maps memory. The system then switches into runtime, and every time a packet arrives, it gets selected by the active packet selector and transferred inside the packet buffer memory, which creates the context structure for the packet. At the end of this cycle, we start Sephirot and the execution of the optimized eBPF program starts. And at this point, Sephirot starts fetching the optimized eBPF instructions from its instruction memory. And as execution goes by, it can call a helper function, it can access maps, read and write packet data and metadata. And once Sephirot terminates its execution, it asserts the exit signal and posts the content of register number zero, which is used by the active packet selector to implement the forwarding decision. And talking about challenges in this design, we faced two main ones. hXDP occupancy must be small, so that a designer can fit multiple accelerators on the FPGA. And the most complex one: hXDP performance must be comparable to that of an x86 CPU core, which is clocked at a frequency an order of magnitude higher. For the first challenge, we assume that the FPGA is also used for other hardware accelerators, and this forced us to keep the hardware simple by adapting the instruction set architecture to the hardware design. In fact, a superscalar approach for the eBPF executor would have been too resource-hungry, so we opted for a Very Long Instruction Word (VLIW) CPU, moving the instruction-level parallelism extraction complexity from the hardware to the software. And we achieved that. In fact, we've managed to keep hXDP resource utilization low, occupying less than 10% of the overall logic on a NetFPGA SUME, running at 156 MHz, which is the line-rate frequency of its internal data path. Making hXDP fast enough was the particular pain point of this work, since an x86 CPU and an FPGA are two very different domains. In fact, while an x86 CPU is tweaked for sequential, speculative execution, an FPGA is well suited for massively parallel execution. So how did we manage to fill this gap? Firstly, we execute eBPF bytecode inside a specialized VLIW CPU core, so that all the complexity of code parallelization is delegated to our compiler.
And for the hardware design, we followed simple design principles, such as a short pipeline, four parallel execution lanes for eBPF bytecode, and a constant one-clock-cycle latency for each of the operations that can be performed inside our data path. On the code level, which is where we put most of our effort, we extended the eBPF instruction set architecture to support our custom instructions, and we safely removed unnecessary instructions from the application code, since we could provide that specific functionality directly in hardware. To illustrate the code optimizations, we will use a simple UDP firewall written in eBPF. Here on the left we see the restricted C of the UDP firewall, while on the right is the relevant eBPF bytecode. Here, to initialize data structures which are going to be pushed onto the eBPF stack, the compiler uses R4 to zero-write the relevant locations. But providing an already-zeroed stack is trivial to do in hardware, so removing those four instructions from execution is safe for us. Here, the argument of the conditional statement gets compiled into two different instructions. Well, we can simply create a new class of instructions that also supports an immediate operand along with the source register, thus merging the two operations into one; this is trivial to implement in hardware and to recognize at compile time. Another recurrent pattern in eBPF programs is the boundary check on the packet headers. This too can be easily implemented in hardware, thus saving a costly branch instruction. Also, many applications need to access the source and destination MAC addresses inside the Ethernet frame, but unfortunately eBPF does not provide a six-byte load and store, and it translates those memory accesses into sequences of smaller loads and stores. We were able to extend the instruction set architecture with three new instructions for six-byte data movement, fitting the new instructions nicely inside the existing eBPF encoding scheme. Another important extension was to embed the forwarding decision in the exit instruction, rather than storing it into register number zero and then exiting. Besides reducing the instruction count, this allows the compressed exit instruction to be detected at the first stage of the Sephirot pipeline, thus saving another three clock cycles. Now we can discuss the effect of these optimizations. We tried every XDP program we had inside the Linux kernel: in fact, we ran a detailed analysis of each single optimization over all the Linux kernel XDP example programs. As depicted in the graph, removing boundary checks and introducing three-operand instructions and six-byte loads and stores provides a significant gain in terms of instruction count reduction. Putting all the optimizations together, we evaluated the impact on the final output: by taking the original eBPF bytecode and applying all the optimizations, we obtain the final optimized and parallelized bytecode which will be executed by Sephirot. Analyzing the instruction-level parallelism and comparing the results with the x86 just-in-time compiler, we can state that x86 mostly expands the code, while our optimizations significantly shrink the resulting VLIW bytecode. In fact, we achieved an average IPC of 2.31 across all the examples, which is comparable to what an x86 CPU core achieves at runtime. And if you want to read more, there is our OSDI paper from November 2020 with all the details.
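To make these patterns concrete, here is a minimal restricted-C sketch in the spirit of the UDP firewall example: it shows the header boundary checks and the forwarding decision returned through register zero (the return value) that the hXDP compiler targets with its optimizations. This is an illustrative sketch, not the code used in the talk; the blocked port is an arbitrary assumption.

```c
/* Minimal XDP "UDP firewall" sketch (illustrative, not the talk's code). */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define BLOCKED_PORT 7777   /* arbitrary example port */

SEC("xdp")
int xdp_prog(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Boundary check on the Ethernet header: required by the verifier,
     * and one of the branches hXDP can move into hardware. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    /* ihl is in 32-bit words; re-check bounds after the variable offset. */
    struct udphdr *udp = (void *)ip + ip->ihl * 4;
    if ((void *)(udp + 1) > data_end)
        return XDP_PASS;

    /* Forwarding decision: the return value ends up in register 0,
     * which hXDP folds into its extended exit instruction. */
    if (udp->dest == bpf_htons(BLOCKED_PORT))
        return XDP_DROP;

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

Each bounds test of the form `if ((void *)(h + 1) > data_end)` is exactly the kind of boundary check discussed above, and the final return is what the extended exit instruction folds the forwarding decision into.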
Then we tested our entire system, first by running some throughput micro-benchmarks. The performance here is measured in millions of packets per second, the traffic is UDP with minimum-size packets, and the line rate is 40 gigabits per second; we are showing this on a NetFPGA SUME. As you can see from the first two graphs from the left, we are always able to outperform an x86 CPU core running at 3.7 GHz. Remember, our design is running at 156 MHz, so this is astonishing for a simple VLIW core running at a frequency which is ten times lower. However, when we access maps, the memory hierarchy of the x86 system kicks in and we fall behind the 3.7 GHz core, but we are still faster than one clocked at 2.1 GHz. We then tested all the Linux kernel examples, as shown in the graph, where we achieve the same performance as an x86 core clocked at 2.1 GHz for programs that do not require intense interaction with the host, that is, map accesses and helper function calls. Finally, we tested our architecture with real-world applications, such as the UDP firewall from the optimization section and Katran, which is Facebook's load balancer. Also here we are able to outperform an x86 core running at 2.1 GHz. In terms of latency, we have also included measurements done on the Network Flow Processor from Netronome. As expected, we are ten times lower than the x86 implementation, but what comes as a surprise is that we are still lower than the Netronome card. So this was November 2020; what's new? Well, let's talk about the platform. We are actively developing this design on a Xilinx Alveo U50, and we have integrated hXDP inside Corundum, which is an open NIC design developed at UCSD. The benefits: we bumped from 156 MHz to 250 MHz; we are able to exploit the UltraRAM of the Virtex UltraScale+ for bigger maps; and if we want huge maps, we can back them with the high-bandwidth memory that the Xilinx Alveo provides. We also have tighter host interaction thanks to the Corundum driver: we are able to pass packets from the data path to the host system through PCI Express. We have also explored a multi-core design, where we wanted to avoid any kind of interaction between the cores. So we used a simple example, a unidirectional NAT, and we decreased the number of lanes for each CPU from four to two: we prefer to exploit data-level parallelism rather than instruction-level parallelism, just to make more CPUs fit inside the FPGA. We were able to close the design with four CPUs, and we are trying to reach eight, although it's quite painful. But with four cores, we get 5.6 times the performance that we had with the previous design. In conclusion, today I've showcased hXDP, the hardware eXpress Data Path, which recreates the eBPF infrastructure on FPGA NICs. The benefits are that we can execute unmodified eBPF programs, we need few hardware resources, and we can free up CPU cores while offering similar performance at ten times lower latency. As future work, we want to dive deeper into the compiler optimizations to try to squeeze more instruction-level parallelism out of the programs. Also, most XDP programs require packet parsing, which is a task that can be safely transferred to a dedicated hardware block. And we want to move towards billions of entries inside our maps, so we are trying to implement a transparent hierarchy over all the memory resources on the FPGA.
And then there is the dream of every hardware engineer: an ASIC. Some fixed functionalities, Sephirot for example, would suit an ASIC implementation very well, so we want to try to move them in that direction. Thanks for watching, and I'll be very happy to take questions in the Q&A session.
FPGA accelerators on the NIC enable the offloading of expensive packet processing tasks from the CPU. However, FPGAs have limited resources that may need to be shared among diverse applications, and programming them is difficult. We present a solution to run Linux’s eXpress Data Path programs written in eBPF on FPGAs, using only a fraction of the available hardware resources while matching the performance of high-end CPUs. The iterative execution model of eBPF is not a good fit for FPGA accelerators. Nonetheless, we show that many of the instructions of an eBPF program can be compressed, parallelized or completely removed, when targeting a purpose-built FPGA executor, thereby significantly improving performance. We leverage that to design hXDP, which includes (i) an optimizing-compiler that parallelizes and translates eBPF bytecode to an extended eBPF Instruction-set Architecture defined by us; a (ii) soft-CPU to execute such instructions on FPGA; and (iii) an FPGA-based infrastructure to provide XDP’s maps and helper functions as defined within the Linux kernel. We implement hXDP on an FPGA NIC and evaluate it running real-world unmodified eBPF programs. Our implementation is clocked at 156.25MHz, uses about 15% of the FPGA resources, and can run dynamically loaded programs. Despite these modest requirements, it achieves the packet processing throughput of a high-end CPU core and provides a 10x lower packet forwarding latency.
10.5446/53386 (DOI)
Hello everyone, thank you very much for joining us today. My name is Fan Zhang, I'm a network software engineer at Intel in Shannon, Ireland. I've been working on cryptography acceleration for over 10 years, and on DPDK and VPP crypto and IPsec for over 5 years. Today's topic is: is your elephant flow a Godzilla? In other words, how to accelerate IPsec elephant flows. This is the agenda for today. First I will briefly describe the elephant flow concept and point out the bottlenecks in securing such flows with existing open source IPsec solutions. Then we will bring in our answer to fix those bottlenecks. The proposal is built on top of FD.io VPP and VPP IPsec, so we will describe those. We will also describe the crypto infrastructure and the engines used underneath VPP IPsec. With the asynchronous crypto engine, we are able to accelerate the VPP IPsec single-flow throughput. To push the performance even higher, we will describe our ongoing project to scale a single IPsec flow to 100 gigabit per-flow throughput. And in the end, we will recap the presentation with a summary. First, what is an elephant flow? It's an extremely large, continuous flow on the internet. Elephant flows account for only about 4.7% of the packets on the internet in total, but when one is active, it can take over 40% of the bandwidth. Leaving the elephant flow aside for a moment, let's look at how the user-space IPsec data-plane approach is used nowadays to process ordinary flows and secure them with IPsec. First, we isolate and limit the per-core processing resources. Those are either CPU core resources or dedicated hardware resources such as Intel QAT. This makes sure every core has the capacity to process its IPsec flows. Second, we apply flow-to-core affinity, which means a flow is processed by one and only one core. This helps maximize cache utilization and also eliminates core-to-core race conditions. Now bring back the elephant flow. Securing an elephant flow with this existing IPsec data-plane approach is very difficult. The reason is, first, that the crypto processing requires a large number of cycles: an elephant flow consists mostly of large packets, as large as your MTU allows. Second, the flow-to-core affinity will always make one core extremely busy. You don't have many elephant flows happening in your system at once, so the other cores remain relaxed. And making one core extremely powerful just to handle such a flow also means wasting cycles most of the time, for the same reason: the elephant flow doesn't happen often. If you have a big flow coming in and you want to load-balance it across multiple cores, the same way regular flows are processed, with IPsec you will have a problem, because it causes a race condition: multiple cores will fight to update the sequence number of the same security association. So this needs special treatment, which will be described later. To overcome these problems, we propose our answer based on FD.io VPP IPsec. So first, what is FD.io VPP IPsec? For that we have to introduce DPDK a bit. DPDK is a data plane development kit and a Linux Foundation open source project. It provides a framework, APIs and libraries, and a number of drivers from different vendors to do fast packet I/O.
People who want to use DPDK either have to write their own application on top of DPDK with the libraries DPDK provides, or use one of the existing applications built on top of DPDK, which include OVS, Tungsten Fabric and FD.io VPP. FD.io VPP, in comparison, is also a Linux Foundation open source project. Differently from DPDK, it is a network function application: it provides a packet processing pipeline, and it is configuration driven, composable and extensible. Because it's built on top of DPDK, it inherits the rich libraries, functionality and driver support of DPDK, and it also has its own native drivers. FD.io VPP has wide protocol support, and users who need something on top of the existing protocol support can easily write their own plugin code and integrate it into FD.io VPP seamlessly. FD.io VPP is already widely deployed on OpenStack, Kubernetes and some discrete appliances. VPP IPsec is a very important component inside FD.io VPP. It is an open source, production-grade IPsec implementation. It is already capable of single-server, dual-socket, one-terabit IPsec processing on top of the latest Intel Ice Lake servers and Columbiaville NICs. It supports a wide range of protocols: Authentication Header, ESP, ESP over UDP and ESP over GRE. It supports the major crypto algorithms, and it supports multiple crypto engine plugins running underneath: CPU-based crypto acceleration and look-aside hardware acceleration can coexist seamlessly in the same machine with no problem. Most importantly, it's efficient and cloud friendly. As of VPP 20.05, VPP IPsec runs on top of the native crypto infrastructure. The native crypto infrastructure is a generic infrastructure that provides symmetric crypto services within VPP. It provides a generic API for user graph nodes to consume the crypto capabilities; the API covers key management and crypto operations. It has the advantages of performance, availability and flexibility, but it doesn't have hardware offload support, which means a core's maximum throughput is also the single IPsec flow's maximum throughput; you cannot scale beyond that. To scale the VPP single IPsec flow throughput, we can offload the crypto. The reason is that the packet processing cost for an IPsec packet is fixed, independent of the packet size, but the crypto cost is not. So we can offload the crypto workload to dedicated hardware like Intel QAT, or to dedicated CPU cores; either way, the RX core gets more cycles to do the packet I/O and the stack processing. However, dedicated hardware and dedicated CPU cores are two different things, and we need a generic asynchronous crypto infrastructure to support both. That's why we upstreamed the VPP asynchronous crypto infrastructure in VPP 20.05. It shares the same key management as the synchronous crypto infrastructure, so you don't need any extra coding on top of that, and it provides generic enqueue and dequeue handlers, so different crypto engines can plug their handlers into the infrastructure to support the transactions. User graph nodes, such as the ESP encrypt and decrypt graph nodes shown in the graph, enqueue their packets, and a dedicated crypto dispatch node, shown as the orange node in the graph, polls the dequeue side: it continuously calls the dequeue function to retrieve the processed packets back into the main pipeline of graph nodes, pushing them into graph nodes such as the ESP encrypt tunnel post node.
With this, you can achieve asynchronous crypto processing inside VPP. The first thing we did was add QAT hardware acceleration through the DPDK cryptodev, since the DPDK cryptodev is well known as one of the most performant crypto infrastructures. What we found out is that the DPDK cryptodev API and data structures are different from VPP's, and the cost of adjusting the way of working plus the cost of translating data structures actually slows down the processing. That's why in DPDK 20.11 we proposed the new DPDK cryptodev raw API. It has more compact data structures, and it supports raw buffer pointers and physical address input. In the end it helped gain more than 15% performance between VPP 20.05 and VPP 20.09, and it is now officially used as the default cryptodev engine inside VPP. What if you don't have a QAT? We can use multiple CPU cores with the software crypto scheduler engine to achieve the same. So what is the software scheduler crypto engine? It is a pure software crypto engine that dedicates CPU cores to processing crypto workloads. Think about the picture over here: you have the RX thread in the middle, which enqueues the packets into a dedicated queue. The crypto workers, the red and blue ones, continuously scan these queues. If a worker finds a packet in a queue whose status is "not processed yet", it updates the status to "work in progress" and processes it; once it's done, it marks the status as "complete". From beginning to end, the crypto workers never dequeue any packets; they only update the status. It is the same RX thread, running the crypto dispatch graph node, that scans the same queue and dequeues the first n packets that have the status "complete", and that maintains the packet order. With this, all three cores can work harmoniously and efficiently without blocking one another. In this design, every core can act as an RX thread or as a worker core, and they help each other to reach the maximum throughput. Also, when we were upstreaming this, we thought about the crypto dispatch node running in polling mode, which for sure gets the best performance, but it's unfriendly to cloud-native use cases and it wastes a lot of cycles if there's no crypto workload to process. That's why we made it support interrupt mode: every graph node can signal other threads and be woken up by their signals, and once woken up we do active polling within the interrupt handling, so the crypto worker cores, the orange and blue ones, try to process as many packets as possible before they fall back to sleep again. The signalling works per crypto frame: once a frame is enqueued, the RX thread signals crypto workers one and two, and when they finish processing, they signal the RX thread for the crypto dispatch. This helps maximize efficiency. With the asynchronous crypto, we can achieve up to 40 gigabit for a single IPsec flow. So why can't we get higher? Because even with the crypto offloaded, there's still heavy packet I/O and heavy stack processing left. To push the performance even higher, we need to offload more of the workload to dedicated CPU cores. But to do that, since it's not a single crypto workload anymore, we need a way to do the load balancing and the reordering at low cost. Intel DLB can actually help with that.
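Before moving on to the DLB-based design, here is a minimal C sketch of the status-flag queue idea described above, in which workers never dequeue anything but only flip a per-frame status, and the enqueuing thread alone dequeues completed frames in arrival order. This is an illustration of the scheme as explained in the talk, not the actual VPP crypto-scheduler code; all names and sizes are made up.

```c
/* Sketch of a status-flag ring: the RX thread enqueues crypto frames,
 * worker cores claim and process them in place, and only the RX thread
 * dequeues, in order, the frames whose status is COMPLETE.
 * Illustrative only; not the VPP crypto scheduler implementation. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 256  /* power of two, arbitrary */

enum frame_status { FREE, PENDING, IN_PROGRESS, COMPLETE };

struct crypto_frame {
    _Atomic enum frame_status status;
    void *pkts;          /* opaque batch of packets (placeholder) */
};

struct frame_ring {
    struct crypto_frame slot[RING_SIZE];
    size_t head;         /* only touched by the RX thread */
    size_t tail;         /* only touched by the RX thread */
};

/* RX thread: enqueue a frame; returns false if the ring is full. */
static bool ring_enqueue(struct frame_ring *r, void *pkts)
{
    struct crypto_frame *f = &r->slot[r->tail % RING_SIZE];
    if (atomic_load(&f->status) != FREE)
        return false;
    f->pkts = pkts;
    atomic_store(&f->status, PENDING);
    r->tail++;
    return true;
}

/* Worker core: scan the ring, claim a PENDING frame with a CAS so two
 * workers never process the same frame, do the crypto, mark it COMPLETE. */
static void worker_poll(struct frame_ring *r, void (*do_crypto)(void *))
{
    for (size_t i = 0; i < RING_SIZE; i++) {
        struct crypto_frame *f = &r->slot[i];
        enum frame_status expect = PENDING;
        if (atomic_compare_exchange_strong(&f->status, &expect, IN_PROGRESS)) {
            do_crypto(f->pkts);                 /* the heavy lifting */
            atomic_store(&f->status, COMPLETE); /* never dequeues */
        }
    }
}

/* RX thread (crypto dispatch): release the leading COMPLETE frame, which
 * preserves the original packet order. Returns one frame or NULL. */
static void *ring_dequeue_in_order(struct frame_ring *r)
{
    struct crypto_frame *f = &r->slot[r->head % RING_SIZE];
    if (r->head == r->tail || atomic_load(&f->status) != COMPLETE)
        return NULL;  /* head not finished yet: wait to keep ordering */
    void *pkts = f->pkts;
    atomic_store(&f->status, FREE);
    r->head++;
    return pkts;
}
```

Because only the enqueuing thread ever advances head and tail, ordering falls out naturally: the head frame is released only once it is complete, which matches the explanation given in the Q&A at the end of the talk.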
Intel DLB is dedicated hardware that can do packet distribution and aggregation while maintaining the order from RX to TX. In the picture you see over here, packets 0 to 5 stay in order no matter how many worker cores in between share the workload. So we have the hardware; how can we use it for IPsec? First let's look at the graph over here. For a single packet, these are the stages that need to be executed before the encrypted packet goes out; this is the encryption direction, by the way. You have the pre-IPsec work, including RX and packet classification; you have to do an SA lookup, and then you have to update the sequence number. As I said, this is the place where the race condition happens if you offload the flow to multiple cores. Then you have to add the tunnel header, do the ICV operation and the synchronous crypto, and in the end you have the TX. So we accept that updating the sequence number cannot be handled by multiple cores: for a single IPsec flow, we use one core to do RX only. It does the pre-IPsec workload, the lookup, and then updates the sequence number. Once that's updated, it enqueues the packets into the DLB. The DLB distributes them to multiple worker cores; the worker cores do the heavy lifting of the tunnel header, the ICV and the synchronous crypto. Once a worker is done, it enqueues the packets back into the DLB, and then the packets are handled by the TX core for the post-IPsec processing. With this, we should be able to get a single IPsec flow up to 100 gigabits per second. This is ongoing work; we estimate to finish and upstream it by the end of 2021. OK, the summary. Today we talked about the VPP synchronous crypto infrastructure, which is performant but fails to scale. We provide a way of scaling single-flow IPsec throughput by offloading the crypto workload to dedicated CPU cores or to a dedicated hardware accelerator. With the asynchronous crypto engine, we can achieve 40 gigabit IPsec elephant flow processing. To scale the performance even higher, we can utilize the Intel DLB or the DPDK eventdev to offload most of the workload to the workers. And that's the end of my talk. Thank you very much, and back to the host. I'm going to go back to a question that Vincent Jardin asked earlier, I think it was on the previous talk: whenever you use asynchronous crypto worker cores, how do you maintain packet ordering? The way it works is that every core that contributes packets for crypto processing acts as a producer core, so it enqueues the crypto ops into the queue it owns. The crypto worker cores other than the producer core, when they voluntarily process crypto work for the producer core, do not dequeue from that queue; they only update the status of the queue objects inside, and for that we use atomic operations. Yes, atomic operations cost some cycles, but compared to the RX and TX efficiency gains, let's say it's not that big. And those worker cores, once they finish processing the bunch of packets inside a queue object, update the status again; this time, instead of using an atomic, they directly write it to say, hey, I'm done. Then the same thread that enqueued the packets scans the queue from the first object to the last, finds the first n queue objects that have the status of done, and dequeues them. So the ordering of the packets is naturally resolved, and a packet never actually leaves the queue.
It always stays on the same queue, so there's no concern there. So at this juncture, I'll just say we have a minute left. If you have any additional questions for Fan, because I'm conscious we're about to run out of time, you can switch over to the hallway discussion; the link for the hallway discussion will appear momentarily. And Fan, if you want to tackle the second question, which was: what are your future plans?
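The talk's summary mentions the DPDK eventdev as the software interface to Intel DLB for this RX / worker / TX split. As a purely illustrative, partial sketch (device, queue and port setup are omitted; the queue ids and the do_esp_encrypt() helper are assumptions), a worker-core loop over an ordered event queue could look roughly like this:

```c
/* Partial sketch of a worker-core loop for the DLB/eventdev split described
 * in the talk: the RX core enqueues packets into an ORDERED event queue, the
 * workers do the per-packet heavy lifting (tunnel header, ICV, crypto) and
 * forward the events, and the eventdev restores the original order before
 * the TX core. Setup code omitted; names are illustrative only. */
#include <stdint.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

#define EV_DEV_ID 0
#define Q_TO_TX   1        /* queue drained by the TX core (assumption) */
#define BURST     32

/* Placeholder for the real work: ESP encap + ICV + crypto on one packet. */
static void do_esp_encrypt(struct rte_mbuf *m) { (void)m; /* ... */ }

static int crypto_worker_loop(void *arg)   /* shaped like an lcore function */
{
    uint8_t port_id = *(uint8_t *)arg;     /* this worker's event port */
    struct rte_event evs[BURST];

    for (;;) {
        uint16_t n = rte_event_dequeue_burst(EV_DEV_ID, port_id,
                                             evs, BURST, 0);
        for (uint16_t i = 0; i < n; i++) {
            do_esp_encrypt(evs[i].mbuf);   /* heavy lifting on the worker */

            /* Forward the event towards the TX queue; the scheduler puts
             * completions back into arrival order for ordered flows. */
            evs[i].op         = RTE_EVENT_OP_FORWARD;
            evs[i].queue_id   = Q_TO_TX;
            evs[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
        }
        if (n)  /* retry on partial enqueue omitted for brevity */
            rte_event_enqueue_burst(EV_DEV_ID, port_id, evs, n);
    }
    return 0;
}
```

The point of the ordered queue type is exactly the property described in the talk: the eventdev (or the DLB behind it) restores the original packet order when the forwarded events reach the TX stage, so the sequence-number update can stay on the single RX core while the heavy ESP work is spread across workers.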
Elephant flows appear irregularly, can consume almost half of the available bandwidth and are consequently associated with a host of issues. Securing elephant flows with IPsec is a well-known challenge to SDN and SD-WAN solutions on commodity hardware. The key problems for those developing solutions are: - How to seamlessly enable dedicated HW to accelerate IPsec processing when available? - How to distribute workloads to more CPU cores and maintain packets ordering to scale? - How to scale up/scale down the computer resource usage when the elephant flow appears and disappears? In this talk we will discuss our recent work done on open-source project FD.io/VPP to address the above problems. We will describe how we utilized and enriched the VPP architecture to accelerate on-demand IPsec elephant flow processing in a unified and seamless way.
10.5446/53387 (DOI)
Hi, welcome to this talk about the intersection between SD1 and Cloud Native. I'm Rory Hewko. I'm a research and development engineer with Cisco working on cloud technologies and how they apply to enterprise networking. And today I'm going to introduce you to the Cloud Native SD1 project. I will start with defining what SD1 and Kubernetes are for the purposes of this presentation. Then go on to describe the architecture of the Cloud Native SD1 project, continuing with an example deployment, and finally, let's dive in. SD1 is a networking technology to connect different locations over any kind of network transport running different services. Basically, the idea is to secure communications between sometimes it's different buildings of the same company, sometimes it's workloads located in the location center, and having a centralized pane of glass for both orchestration, analytics, and control. These days, a lot of enterprises are running software-defined wide area networks, especially with companies with multiple branches and connect these branches securely to their headquarters. Also, now with the pandemic, this SD1 concept is being a little bit muted, and there are efforts bringing lightweight SD1 with zero trust to actually the home office of the employees. This can be many things to different people, or the purposes of this talk. What is important is that it is a declarative container orchestration system, with the emphasis being on declarative. What you can do is when you have an application that has been decomposed into microservices deployed in a Kubernetes cluster that is basically a collection of physical or virtual machines, and run these microservices as containers in a declarative way, meaning you specify to the system that you want these many replicas of this particular microservice, and then the system itself, it will take care of how it distributes it between different nodes. For this talk, you will see that we are going to use this declarative approach of Kubernetes to apply the way network traffic or different services is going to be optimized over SD1. Today, SD1 and Kubernetes are like ships in the night. They don't know anything about each other. Basically, the net ops, which network operators are responsible for deploying the SD1 for a enterprise. They have the access to the control plane for the SD1. They configure the services, connections, and so on, but they don't know what the DevOps are deploying in the Kubernetes clusters, what kind of applications, what are the requirements of the applications. There is some level of optimization that they can do trying to discover what kind of application is running, but it is not as granular as it could be. What the CN1 project is attempting to do is to integrate the SD1 world and the Kubernetes application world so that there is an easy way for the SD1 to detect different types of applications, their network requirements, and optimize traffic based on that information. Let's dive in how this is going to work. As I said before, the net ops are going to configure the SD1 through the SD1 controller while DevOps are deploying services, applications on Kubernetes. These services are typically described in YAML files and applied to the Kubernetes cluster, and then the orchestrator is able to deploy these containers and bring up the services on the Kubernetes side. 
In order to make the SD1 aware of the type of applications and their requirements, what we are proposing to do and we are doing in the CN1 open source project is that DevOps should annotate services with the different metadata about that application's network requirements. One of the components of the project is the CN1 operator. It is a component that is following the operator pattern for Kubernetes, and the way it is implemented is that it is constantly watching for annotations for services based on certain patterns that can be configured and is then making this information available in a service registry. This service registry contains information about the services where they are deployed, which endpoints identified by the NIP address at the port, and the metadata that was added as an annotation by the DevOps. The automation here is basically that the DevOps declaratively only have to attach the metadata to a service, and then the CN1 operator detects on which endpoints and at which port that service is available, registers this information into the service registry, and then the second CN1 component, the reader, is continuously watching for new information in the service registry. Whenever a new event happens, meaning that a service is either added, updated, or deleted, the CN1 sees this event and forwards the necessary information to the third component of the project, which is the CN1 adapter. Together with the reader, they form the CN1 adaptation link, and adaptation in this case means that they adapt this information to the particular APIs exposed by the CN1 control. Right now, the project supports the Viptela CN1, and we are looking also to develop support for Meraki, but the whole architecture was designed with the idea that the adapter can be easily replaced and have specific adapters for different CN1 controllers. The interface between the reader and the adapter is clearly defined and is basically passing over these events that the reader detects in a service registry. The adapter then maps these metadata that was configured on the DevOps side to policies that can be applied on this SD1 controller. In an SD1 controller, the NetOps already developed and configured different policies for the whole network, and they can integrate the information they received from CN1 into the policymaking pipeline. Before the system is started, they agree with the DevOps on the type of metadata that they will be accepting, and then they define how they will map those to actual SD1 policies. This will be made clear with an example. To illustrate the way the CN1 project works, we will consider a radio conferencing app which is decomposed in four different services, each of these services with different types of traffic characteristics, as you can see here on the slide. The NetOps then define using the CN1 adapter four different types of labels which the DevOps can then use to annotate these microservices. In this case, they use real-time streaming, file transfer, and best effort, and agree with the DevOps that every day deploy an app, they can use one of these labels for the purposes of SD1 optimization. For example, real-time can mean that the latency should be minimized, and also packet loss should be minimized. Streaming would mean that the SD1 should try to find the lowest latency with the highest bandwidth, file transfer means that we want the highest bandwidth and minimum drops, and then whatever is left goes to best effort. 
Then the DevOps annotate the apps with these labels provided by the NetOps, and whenever these annotations happen, the CN1 operator picks them up. It is published then in a service registry from which the CN1 reader reads the information about all the endpoints implementing each of these applications and their tags. The NetOps then define the policies for the SD1 for each of these tags and applications. That may mean choosing between different tunnels based on SLA, static configuration, or other possibilities that the particular SD1 implementation allows. So, this way, all the traffic from these different applications can be optimized in a way that is best for the particular traffic profile of that application. So, let's show how all this works in practice with the demo. For the demo purposes, we chose a simpler application with a single microservice that is doing video streaming only. We set up a Cisco SD1 where we have a Kubernetes cluster in cloud serving this streaming video application and then in a branch a host trying to watch this video over the company SD1. The branch has connectivity over two different links. One is direct internet access and the other one is an MPLS circuit, both of which have these specific labels on the SD1 called public internet and busy internet. In order to show this demo, we introduced artificial limitation on the public internet tunnel between the two SD1 routers that limits the bandwidth to 5 megabits per second. Okay, so let's go through the different stages of the CN1 walkthrough. As we said before, the DevOps start with a YAML file describing the service they want to deploy. And for the purposes of this demo, which was a streaming video application, we create a namespace for it to separate namespace. And one of the labels that we apply is CN1 operator loud because the way the operator is configured is to only watch for services in the namespaces there specifically. This is useful to exclude namespaces that we are not interested in optimizing or are not supposed to be optimized. Then we define the deployment of the streaming video service. We ask for two replicas and we will use a container called streaming video service. This container is just a VLC application streaming a video file over this container port 88. We then define a service here. We deploy it into the streaming video namespace. We ask for load balancers since we are deploying on GCP, it is very simple. We just ask for an internal load balancer type. We explicitly specify the IP address of the load balancer so that it makes things easier for us. And then this is the annotation that we were talking about and that is specific to the CN1 project. We define traffic profile metadata with the value standard. This means that this application gets the best effort treatment. I already applied this YAML filing, it is already running in a cluster on GCP. Here are the namespaces we have in the cluster. We have the streaming video where we specify that the CN1 operator should be watching this. We have a separate namespace for the CN1 operator. We have the Kubernetes system namespaces and the default on which we don't use for this demo. We have this one deployment for the CN1 operator, one replica and one pods. As we ask for the streaming video service, we have two replicas and two different pods. And the streaming video service is annotated with the standard traffic profile. The service registry we use is Google service directory. 
In the service directory, the CN1 operator created a new service which will show up here with the namespace specified, which is mapped from the Kubernetes namespace. And the annotations that were discovered. This owner CN1 operator is just there so that it knows that if there are existing services here, it should not touch those, only those ones that have this annotation. And then the operator also discovered one endpoint where this service is available, which has the IP address of the load balancer and the port a. Once the reader picks up this information, passes it through the adapter, it will be applied as policy to Cisco Vmatch. In order to be used as policy, it has to be mapped according to rules by defined by the net-outs. In this case, we already defined some of the rules. So whenever the adapter sees the traffic profile key and the value is standard, then it instructs this SD1 controller to use this policy, which is called use public internet. This policy just says that regardless of any other thing, use the tunnel that is called public internet between any kind of SD1 edge routers. And if the traffic profile is video, then use the business internet tunnel. This is a mapping that is defined by the net-outs. And then whenever the adapter receives an event containing these specific metadata keys with these values, it will extract the endpoint, the IP, and port, and will create the necessary policy in the SD1 controller. Let's take a look. This is a dashboard of the Cisco Vmatch SD1 controller, where we see that we have two sites with full-WAN connectivity. We'll go to the policies and ask for a preview. And here we can see some rules defined that if the traffic matches this source port and the source IP, or this destination port and destination IP, then use the public internet tunnel. T-Lock is the tunnel locator and same policy for destination. On to the client. So we have the video playing, and it is limited to five megabits per second. As you can see, it blocks every now and then. And sometimes there are also artifacts happening right now. So in order to improve the situation, we decide that we want to change the annotation from standard to video. Now we can edit this YAML file and reapply it, or we can just use kip-cuddle-annotate and we will say traffic profile video in the namespace streaming video. As you can see, it has already been updated in the puddle-nets cluster. And the operator picks it up and publishes this into Google service directory from which the reader picks it up, passes it over to the adapter, and then the adapter installs an updated policy. As we can see, there is a CLI template configuration right now going on because of this. And in the preview, we can see that this match has now been updated to use the business internet link on the SD1. Because the business internet link is going to be used that has much higher bandwidth, we can see that now the bandwidth increased quite a lot and is not limited at 5 megabits. It depends on the bitrate of the video itself. We peaks up to over 10 megabits and we can stay for a while and see that there is no artifacts or choppiness on the video anymore caused by belowband. Also in the B-manage application, we can see that the traffic has shifted from the business internet link to the business internet link. And that concludes the demo. 
To learn more about the Cloud Native SD1 project, please visit our GitHub organization where you will find the source code organized in different repositories, the operator, the reader, the adapter. We also have a separate repository for documentation and one repository with automation script that will help you bring up this setup. You can contact us at cny.csco.com email address. We also made available a lab on the Google Quick Labs Learning Platform. If you want to take this lab, you can head over to the website and search for the Cisco Cloud Hub solution. It is very similar to the demo that we presented here. You will need seven Quick Lab credits, but if you want to take the lab, just contact us and we can give you some tokens for Quick Redits taking the lab. There you can repeat most of the steps in this demo at your own pace, play around with the APIs and see the details of the implementation line. And we are happy to welcome you in the Q&A after this session. Thank you very much for your attention.
Kubernetes is becoming the platform of choice for more and more application developers. As applications become more complex and more distributed, they may span multiple Kubernetes clusters, or a combination of Kubernetes and on-premise workloads. While internal traffic within a Kubernetes cluster is handled by the CNI plugin, the external traffic between these workloads, or from workloads to end users, is often carried over a Software Defined Wide Area Network (SD-WAN), which is used for traffic optimization. The Cloud Native SD-WAN (CN-WAN) open source project was created to help SD-WAN deployments to identify Kubernetes applications and optimize traffic based on application requirements, thereby bridging together the DevOps from Kubernetes' cloud native world with the NetOps from the SD-WAN world. CN-WAN enables developers to annotate their applications, specifying the type of network traffic generated by the Kubernetes workload, and this information is then published into a service registry. The NetOps configuring the SD-WAN can take these annotations and develop network optimization policies with the clear knowlegde of the traffic type they intend to optimize. Join us for this presentation, where we will describe the components of the solution, the interfaces between the components, and how you can adapt this solution to different SD-WAN products and service registries.
10.5446/53392 (DOI)
you my name is Tashami Hoja and I'm a computer engineer from Albania in this workshop we'll see how to use the wire guard VPN with the help of a Docker container. More details about the steps that are described in this presentation can be seen in this blog using wireguardvpn.hoja.fs.al. If you have any questions feel free to ask them in the chat and I will try to answer them by the end of the presentation. Wireguard is a simple fast and modern VPN that utilizes state-of-the-art photography. It is quite flexible and can be used in many situations. It has some very nice features and you can learn more about it at its own webpage. We will install a wireguard server with Docker and Docker scripts. Installing Docker is easy. It is done in the usual way. Installing Docker scripts we install its dependencies gitmake and m4 and then we clone the gitlab airport. In this directory of Docker scripts ds then from this directory we run make install. Then to install wireguard container we run this command ds pull wireguard ds init wireguard at wireguard. We go to the directory var ds wireguard. We may modify settings and then ds make. While we modify the settings we can see these options. Routed networks, DNS servers, low network internet access, client to client and keep live period. Routed networks are the networks that the client will route through the wireguard interface. In this case all the routes go through the wireguard interface since it is 0.0.0.0.0.0. DNS servers, these are some default values but you can set your preferred DNS servers here. A low internet access, yes. In this case we want the clients to access the internet through the wireguard server. So the wireguard server is behaving like a nut server for the clients providing them internet access. Client to client, no. We don't want the clients to ping or to access each other. Keep live period 0. No keep live packages will be sent by wireguard. Then after installing the server we can use it like this. First we create configuration for each client. For example for client one with this IP, for client two with this IP, ds client add, client one, client two, etc. Then we need to share the client configurations and to share them we use the command ds share. There are several tool or three methods to share the client configuration. In this case we are using www. So that the clients can download the configuration file with the command like this. Then on the client side we need to install wireguard itself and then we can test the configuration with this command wireguard quick up client configuration. This command curled if config.co shows us the public IP and in this case it should show the public IP of the VPN server. And then we shut it down, wireguard quick down client configuration. So we intend to make a configuration of the VPN connection permanent. We copy the configuration to directly etc wireguard, then enable a service and then start this service. Some cases where this wireguard server can be used are this one. Securing connections to the internet creating a virtual private LAN, routing between remote private LANs and accessing clients from a cloud server. Let's see them one by one. Secure connection to the internet. In this case we have a laptop or a smartphone and we access the internet for example in public wifi hotspot. And to make the connection to the internet secure we use the VPN tunnel so that hackers cannot see our communication to the internet. And then we have a virtual private LAN. 
In this case there are clients 1, 2 and 3. They don't have a public IP but we want them to communicate with each other as if they are in a private LAN. Each of these clients can create a tunnel with the wireguard server and through this tunnel they can communicate with each other securely. The settings in this case are like this. This is the network of the private LAN. Allow internet access now because we don't want the clients to access the internet through the wireguard server. They access the internet through the default gateway that they have. Client to client yes because we want the clients to communicate with each other in this case and keep alive period. We set the value for keep alive period because we don't want the connection between the client and server to expire. Because if the connection between client one and server expires then the client two cannot access client one. The third case is routing between remote private LANs. This case is similar to the previous one in the sense that client one and client two can communicate securely with each other through the wireguard server. But in this case also the other clients in the respective LANs can also communicate with each other. In this case we have to set routing for example if client one want to communicate with client five then it should route through client one. So the request will go from client one to from client three to client one and then to the server and then to client two and then to client five. Use case for accessing clients from a cloud server. Since the wireguard server is in the Docker container we can install another Docker container in the same Docker virtual LAN and we want from this Docker container to access the wireguard client. And also the wireguard clients to access the Docker server. This is an example that explains the last use case. In this example we have Waka Moli server somewhere on the cloud. The students access the Waka Moli server and then the Waka Moli server can access computers in a school network using a wireguard VPN tunnel. And more details about incrementing such a use case can be seen in this blog accessing computer labs remotely. First of all we need to install Docker. I've already installed Docker. We also need to install on the server Docker scripts. First we install the dependencies. Then we clone the repo from GitLab. We want to clone it on opt Docker scripts. Then we go to this directory that we just cloned the script. Now we have installed Docker scripts. Now we are going to install the wireguard container itself. We get the scripts for the wireguard, the DSPool wireguard. We initialize the directory for the container. DSPool wireguard at wireguard. Now we go to the directory that we just initialized, which is this one here. Here we see the settings. The default settings are okay for the tests that we are going to do. So we don't have to change anything. Then we type DSMap to make the container. Now that we installed the wireguard container we need to create client configurations. We do it with DSClient, DSClient add client name and then IP. Let's create a second client. We will command DSClient alias. You can see the list of clients, but we can also see them on the directory clients. These are the configuration files for the clients. To transfer these configuration files to the clients we can use the command DSShare. Now we can send to the client this command so that he can download the configuration from this URL. Let's go to the client. We have downloaded the configuration. 
To test the VPN connection with this configuration we use wg-quick up and then the configuration file. This is the IP of the VPN server, so we are accessing the internet through the VPN server. Let's shut it down and check the public IP again: now it is our own public IP, not the VPN's; it is a different one. To make the configuration permanent we copy the configuration file to the WireGuard configuration directory and set it up as a service: systemctl enable wg-quick@client1, then we start it. Let's check the public IP: it is the IP of the VPN server. Let's stop it and check the public IP again: it is a different one. If you have any questions feel free to ask. Thank you for your attention. I think the live stream should have started. Thank you for presenting here. We have a few questions from the audience. The first one is from Ray: is WireGuard running in the kernel or is it in user space? I don't know if you are familiar with this. My answer may not be definitive, but as far as I know WireGuard runs in kernel space. This is one of the differences between WireGuard and OpenVPN, and it makes it faster and more efficient, partly because the packets don't have to travel between kernel space and user space. What are some of the other differences between OpenVPN and WireGuard? I just said that OpenVPN runs in user space and WireGuard runs in kernel space. Another difference is that WireGuard is a recent development, or at least it has reached stable status only recently, maybe a year or so ago, while OpenVPN is much older than that. Another difference is that the implementation of WireGuard is much smaller than the implementation of OpenVPN; it has far fewer lines of code, just a few thousand. This makes it easier to check and to see how it is working: it is not really possible to audit software that has 50,000 lines of code, but software that has just a few thousand lines is much easier to audit and to understand. I can't remember any other differences. WireGuard is a simple implementation, like a building block: it doesn't offer a lot of features itself, which makes it flexible and usable in many situations, but it has fewer options, and for normal users it has to be wrapped by some other application in order to make it easier to use.
WireGuard is a simple, fast and modern VPN that utilizes state-of-the-art cryptography. It is quite flexible and can be used in many situations. In this workshop we will see how to install a WG server with docker-scripts, some of the usecases supported by it, and we will test/demonstrate a couple of them.
10.5446/52718 (DOI)
Hello, I'm Otto Kekalainen and I am the person responsible for the Debian packaging of MariaDB and related software. And I'm responsible for the version of MariaDB currently available from Debian and Ubuntu distributions. Today, I'm going to tell you what happens after a new upstream release is out and before it becomes available for users in Debian and Ubuntu to install or upgrade to. There is quite a lot of quality assurance going on and I hope that the users of Debian and Ubuntu would appreciate all the vetting of new releases that is done on their behalf. And I'm sure quality assurance in Debian is also interesting for MariaDB developers as a source of information that can be used to improve the MariaDB server itself. I will publish these slides on Twitter during this talk. So if you're interested in Debian and MariaDB, please follow me on Twitter. So first of all, why is Debian relevant? Debian and Ubuntu are among the most popular Linux distributions and there are probably millions of users who start their journey with MariaDB by running up install MariaDB server. The relation of Debian and Ubuntu is that Ubuntu is downstream of Debian and builds upon everything that is in Debian while adding many customizations and some components of its own. Ubuntu has also its own release cycle and support periods that are independent from Debian. For MariaDB, this means that new versions of MariaDB go first into Debian and from there they automatically flow to Ubuntu. So what's the state of MariaDB currently in Debian and Ubuntu? The current Debian stable release buster was shipped with MariaDB 10.3. The next Debian release bullseye will be shipped with 10.5. That will happen sometime in 2021, probably in April or May. Galera 3 is in buster and Galera 4 will be available in bullseye. Galera 4 is actually already available in buster backports and stretchbackports. 10.5 might become available officially via the backport system as well in the future, but you can naturally install it for any distribution version from the merdib.org repositories as well. The Oracle MySQL version has not been available in official Debian releases since Debian. It does exist in the Debian unstable archive, sort of considered unfit for release permanently, and thus it will not be included in stable releases of Debian. The version of Oracle MySQL unstable was 5.7 for a long time, and since last fall version 8.0 has entered Debian unstable. There is also a meta package MySQL defaults that dictates what is the provider of MySQL compatible database. And in Debian the release team decided to default to MariaDB and only MariaDB, while Ubuntu defaults to Oracle MySQL, but MariaDB is also available there, for users can choose in Ubuntu. So here's the timeline of MariaDB and MySQL releases and how they relate to Debian releases. As you can see, Debian 8, codenamed Jesse, shipped both MariaDB and MySQL, and from Debian 9, codename Stretch onwards, the official Debian release has only included MariaDB. Also note that as MariaDB releases are almost annual and thus more frequent than the Debian releases, we have the situation that users of current stable Debian will skip 10.4 and jump directly from 10.3 to 10.5 when the upgrade from Debian 10 codename Buster to Debian 11 codename Bulse i will happen. If you want to check out the state of each package and their respective versions, you can at any given time go to packages.debian.org or packages.ubuntu.com and look up the package MariaDB server to see what version is in what release. 
All right, next we look how the Debian packaging process actually works and what happens after a new upstream release is available. So first of all, Debian packaging is done in a separate Git repository at salsa.debian.org which is a GitLab instance Debian uses. This is completely public and you can check it out yourself. In Debian there's a tool called Usecan that automatically checks for new upstream releases and if one is available it can automatically download the new upstream source package and import that into the packaging repository in a single commit that represents the new upstream version. Note that in Debian all the packaging work goes into the Debian directory and the surrounding code is kept in a pristine state. Also note that in Debian there's an extra revision number that comes on top of the upstream version string. This is useful as Debian revision number will grow each time a new Debian release is made and bug fixes and patches improve upon the same upstream version. And for software to get from the Debian packaging repository into the actual Debian archive a Debian developer needs to upload it. There are about a thousand Debian developers in the world currently and only they have the power to do uploads. So contributing to the Debian packaging repository is open for anybody but changes to go into a Debian developer needs to approve and upload it. And for Mernieby, Galera and other related packages the Debian developer is me. So this picture is probably too small for you to see but you can check out my slides afterwards and the details of this image. But the main point here is to illustrate that in Debian all new software and upgrades to existing software go into the Debian unstable archive first. Once the package has been in the unstable for a while without anybody filing severe bugs against it and all the quality assurance systems seal the green light for it, the package migrates to the Debian testing archive. Then about every second year the testing archive enters a so called freeze period and then only bug fixes for existing software is accepted. And once deemed fit for a release a new stable release is announced from that frozen testing. And then after that the unstable and testing cycle will then continue to roll again until a new freeze and a new stable is about to be made. There's also a release flow outside the unstable testing cycle and that are the urgent security releases which can go directly into a stable release. There are also so called stable updates as a channel for less time sensitive security updates and important bug fixes. But those stable updates are quite limited and keeping stable is a holy principle for Debian release managers and they really don't want to take risks about it. Security updates are also the only case when we do direct uploads to the Debian archives sorry Ubuntu archives. Normally we do that. Everything goes to Ubuntu first. From a quality assurance perspective the security releases are challenging as it bypasses most of the usual quality assurance processes but we still manage it because in Debian we keep a small delay to allow some time for basic quality assurance to happen and there's actually been several cases where we've delayed making security release for a couple of days and then in upstream they've noticed that the new security release broke something and then we've just passed that one and took the second security release soon after that. 
Now Debian is notoriously known for being quite strict on many things and it's based on the Debian policy. Linux distributions and systems administrators love stability and standards and the Debian policy exists to ensure a minimal quality for all packages in Debian. In addition there's also a lot of guidelines and other documentation you need to follow. Some of these policies are about decisions done in Debian but I would argue that most of it's pretty generic software quality requirements and following the Debian policy for any given software usually means that the quality will improve and it's less about having to follow some random conventions. Although there are a couple of Debian specific conventions in there as well. And in fact the process to become a Debian developer consists mostly of that you need to prove that you know the policy and that you're able to consistently follow the policy for several years in your Debian packaging work so that you can become a Debian developer with uploading rights. The Debian policy is something humans are expected to follow and to support it there are also quite a lot of automatic quality assurance. All of the quality assurance systems in Debian are public and we encourage people to check them out. There are many things to be found regarding software quality that is useful and applicable also in the upstream project as well. Here are some links of the overview pages and if you follow me on Twitter you'll see the link to the slides now and can open these links yourself. Here's a screenshot of the packaging overview for a maintainer. In Debian there is a MySQL team that covers both MySQL and Merdb and related software. I won't go into the details here this is just to illustrate how it looks like. Here you can see the tracker page for Merdb 10.5 in Debian. There are listed things like what versions of this is where, what binary packages are built from this source package, when was which source package uploaded to Debian, what is the status of the package, can it migrate from testing to unstable, are there some serious issues detected, how many bugs does it have and so on. This is basically a kind of portal with a lot of links leading to other pages with more information about the quality of this package in Debian and there's also some links to the package status views in Ubuntu as well. So let's have a look at the quality assurance systems themselves and the first thing here is the build system overview. This is the first step that runs after a new source package has been uploaded. Debian has currently 10 official architectures that the stable release will be available for and in addition 13 unofficial architectures. Currently Merdb builds and passes the basic test suite for almost all of them and what you've seen here is a better state than what any previous version of Merdb and MySQL has ever had even though it's not fully perfect yet. But really for such a big piece of software being fully perfect on all architecture is probably not a feasible goal. There's also a second build system, the reproducible build system. So this is a quality assurance system to ensure that if you make two builds of the same source code the resulting binaries should be identical bit by bit. Reproducible builds is important in being able to secure the supply chain of open source software. 
If somebody were to inject backdoors into the binaries, they would risk being detected, as the binary file hashes would start to deviate from what others who have built the binaries independently see. Naturally such supply chain security does not exist for closed source software, as there you can hide whatever backdoor at any time and nobody can verify anything. But in open source you can actually do this, so we should definitely make sure that our software can be built in a reproducible manner to support this security mechanism. Basically the MariaDB server itself does build reproducibly, but the RocksDB and Mroonga plugins don't, and if any of you know this topic and want to help, I would be very glad to see this finally fixed. The reproducible builds website has a tool called diffoscope that shows exactly what differs in the built files from build to build, and fixing it is just a matter of tracking down what source code line produced those files and the changes in them. Debian also has its own continuous integration system, and it's based on the autopkgtest scripts. This is not the upstream test suite but a custom one for Debian, and if you want to see how it looks for MariaDB, check the debian/tests directory. This CI runs for every upload of a new source package, as you would expect, and in addition it also runs every time a dependency of the package updates, which is very useful in detecting whether a particular update of a dependency breaks the software that depends on it. So in this screenshot you can see how the MariaDB 10.5 tracker page looked recently after a new upload of MariaDB 10.5, because it triggered a lot of autopkgtest runs in all the software that depends on MariaDB. This is something that is almost not done at all upstream, and it is a very important source of information about MariaDB's compatibility with previous versions of itself. To some degree it is also a verification that it is still a drop-in replacement for Oracle MySQL in practical use cases. Ubuntu has this exact same system, which runs the same autopkgtest tests. The only difference is that it tracks the packages that land in the Ubuntu archives instead of the Debian archives. This way Ubuntu ensures that everything included in their own archives is kept in a working state. Next up is piuparts. This is a system that tests installs, upgrades, removals and in general the package lifecycle. It does not run the binaries in the packages that much, but focuses on verifying that the Debian maintainer scripts work as intended, that packages upgrade cleanly, and that, for example, when a package is installed and uninstalled it behaves correctly and doesn't leave any cruft behind. Then there's Lintian. This is a very extensive tool that contains thousands of small tests, ranging from checking for typical spelling errors in source code to checking that the symbols of built shared libraries are exposed and versioned correctly. It is tailored in particular to check that packages follow the Debian policy, but many of the things it can detect are about general software quality, like the examples I just mentioned. Packages in Debian strive to be so-called Lintian clean, which is quite feasible since most of the things Lintian can easily detect are also very easy to fix. MariaDB is not yet Lintian clean, but it could be if all of my pull requests at various upstreams such as MariaDB, RocksDB, Galera and others were merged and included in their respective next releases. 
In addition to Debian's standard quality assurance systems we also have some MariaDB-specific testing going on. On Debian's GitLab instance we use the GitLab CI features to run testing on every commit to the MariaDB, Galera and other packaging repositories. This version of GitLab CI that includes the standard Debian quality assurance is called Salsa CI, and we have extended it to also run several MariaDB-specific tests, such as testing upgrades from pretty much all previous versions of MariaDB and MySQL. We build test binaries against MariaDB Connector/C to verify that the libraries work, we test running encrypted TLS connections to verify that all the software needed there is interoperable, and many other things. This is fully public, so you can follow the links in the slides and learn about the details. We also have the users in Debian who contribute their share of quality assurance by filing high quality bug reports. The Debian bug reporting systems are funny in the sense that they are quite hard to use, which has its downsides, but the upside is that if somebody does file a bug report then it is usually quite well researched and actionable, and many bug reports even include patches to fix the issue, which is pretty nice from a maintainer's point of view. Here you can see the statistics for MariaDB 10.5 in Debian, and it's pretty stable. We haven't had any new bugs for a couple of weeks now, and I think that is mostly attributable to the Salsa CI tests, which weed out most of the upstream regressions before the software lands in the hands of users. Ubuntu has its own bug reporting system, launchpad.net, and unlike the Debian one this is graphical and easy to use, maybe a bit too easy to use, and most of the bugs filed there are just closed as invalid or won't-fix. Still, we need to keep an eye on this as well as an additional source of user-reported information. So, I showed you that there can be multiple Debian revisions of each upstream version, and there are quite a lot of fixes that are uploaded into Debian and verified there. We do our best not to diverge from upstream, and therefore all improvements that have proven themselves in Debian are eventually submitted upstream, most notably to MariaDB itself but also to upstreams of upstreams like RocksDB and Mroonga. Here are a couple of examples to illustrate this. This way all users on any Linux distribution will benefit from the quality assurance work done in Debian. As you now hopefully have learned, there is quite a lot of work being done in Debian. It's not just about packaging MariaDB but actually quite a lot of release engineering and quality assurance to ensure that what lands on systems running Debian will install, upgrade and run smoothly for years and years. Also note that it's not just about working with the latest major release of MariaDB: we always have the older versions in maintenance in parallel, as they are shipped in stable releases. If you are using MariaDB in Debian or Ubuntu yourself, I recommend you start contributing to the package maintenance effort so you can, for your own part, ensure everything will run smoothly in the future as well. You don't have to be a developer to contribute; you can also greatly help by triaging bug reports or enriching bug reports so that the issues are pinpointed and easier to fix. 
If you have packaging improvements as code, you can send them as merge requests on salsa.debian.org, and naturally if your contribution is useful for MariaDB in general then I recommend you submit it directly upstream. Whatever you want to contribute with, the recommended first step is to join the packaging team mailing list in Debian and announce yourself there so we can start working together. Feel free to reach out also on social media, and as I said, the link to my slides is already available on Twitter. See you there.
All about MariaDB packaging in two of the most widely-used Linux distros, Debian and Ubuntu, including the strict requirements demanded by distros, and the impact on fixing bugs “upstream” in MariaDB itself.
10.5446/52712 (DOI)
Hello, welcome all. Today we are going to talk about migrating a MariaDB cluster to ARM. We will first try to understand whether it is worth migrating a MariaDB cluster to ARM, and if yes, then how you can do it and what challenges you would face. So with that, let's get started. My name is Krunal Bauskar. That's a quick intro about me: I have been working in the MySQL space for more than a decade now. In the past I have worked with multiple organizations like Percona, Oracle/MySQL, Yahoo Labs, Kickfire and Teradata. I am currently working with Huawei as part of the open source DB group, where I'm driving the MySQL on ARM initiative, which covers all kinds of open source databases, and we are trying to make the complete ecosystem optimal for ARM. And I do blog; that's my blogging site. So with that quick intro, let's jump to today's agenda. First, we will look at the growing ARM ecosystem. Then we will try to understand the state of MariaDB on ARM. And then we will jump to our main topic of why to consider migration and how to migrate. And of course, we'll talk about open issues and challenges. I'm sure you have all heard about ARM. ARM processors have been widely used in different verticals, the primary ones being mobile and network equipment, but they are now very commonly found even in smart home appliances, automobiles, and defense and space equipment. Something which has started gaining traction is the use of ARM processors in high performance computing. All this was made possible in the last couple of years as ARM instances were made easily available through cloud providers. Huawei offers ARM instances, Amazon offers ARM instances in the cloud, and Oracle plans to offer them. Apple offers ARM on the desktop through their new M1 chip, and there was news that Microsoft is also working on ARM chips. The advantage of ARM processors has been that they consume less power, thereby reducing your total cost of ownership, and that's the reason why ARM has been gaining so much traction lately. Not only on the hardware front: even on the software front, almost all major OS providers now provide a port for ARM with regular releases. Most of the leading software, including MariaDB, has been ported to ARM and there are regular releases of this software for ARM. Beyond that, if you look at developer and programmer interest, a lot of the new generation of developers who are already connected with the ARM community — because they were trying things out on kits like an Arduino or a Raspberry Pi, or they were working on Android — are joining this community and are going to expand the ARM community further. So overall, a pretty healthy ecosystem is evolving around ARM, and that's why more and more users, developers and organizations are realizing the importance of ARM and have started to work towards porting their software to ARM. With that quick intro, now let's understand the state of MariaDB on ARM. MariaDB has been releasing packages for the MariaDB server on ARM for quite some time now; in fact, MariaDB was a forerunner here in starting to release packages. Not only does MariaDB release packages, they have accepted quite a good number of optimizations and they have also done optimization work of their own, and in that way the MariaDB server is, we can say, pretty well optimized for ARM. 
Now, even today, when they do evaluate new feature, they given that it is support, ARM is part of their support already. So they do evaluate this new feature, especially the performance feature on ARM with help of community. Currently, MariaDB offer packages for distros like CentOS and Ubuntu. And, you know, in upcoming days, this particular matrix is going to improve further and MariaDB on ARM packages will be offered almost on all the other distros, making it on par with other architecture. Of course, bug fixes are looked with the same priority like other platforms. And performance, we are going to talk more about it. But on performance front to MariaDB on ARM scales better compared to its other counterpart. Ecosystem in general around MariaDB on ARM is also growing. We will also again have a talk, you know, a small slide about it in the follow-up things. So overall, we can say MariaDB on ARM is complete production ready. And, you know, if user are thinking of evaluating it or probably trying it, I think it's we can say so that it, you know, user can start using it. So if you talk about ecosystem, since database rarely exist in standalone fashion, but server, of course, is the main thing, but there are a lot of other supporting components. So server or MariaDB server, as we know, is already offered. HA solutions, if you are looking for in form of binlog applications or Galera clusters, that also are inherent to the MariaDB server. Maria backup is there are packages available for ARM. Load balancer proxy SQL. Of course, there are packages again available for ARM. Max scale is missing. Hopefully by community push, MariaDB will start also port the max scale on ARM. PMM community has evaluated it along with Percona engineers to and PMM does work on ARM connectors also is there. Percona toolkit, most of them are pulse script and just work out of box again, community has evaluated them. So what is we can say all in all, if someone wants to run a full stack ecosystem using only open source software on ARM, then is that possible? And yes, it is possible. And from every category, we have at least one or two tools or softwares being available. And the good part is this is expanding. So as of now, you see, you may see one tool in some categories, but maybe in upcoming days, you would see more tools getting added. Now, if user do care about running MariaDB server in form of, let's say orchestration in using Kubernetes or Docker swarm, then of course, both of them are also available on ARM. So, you know, it gives a pretty good confidence that any user can actually start running an end to end system completely on ARM. So with that quick, you know, discussion about would state of MariaDB on ARM, let's now jump over today's main topic of why consider migration. So I'm sure when we talk about migration, most of the users saw DevOps would consider that, oh, well, why should we migrate? Because migration of course has its own challenges. And then there are a lot of questions like what if this happens and how do we migrate and all. So let's understand the motivation behind migration, right? So the biggest motivation here is the cost saving. So ARM is different processor. So you can actually if you want to study ARM performance, of course, we are going to talk about both the model, but the model which we are using here is the same cost model, or what we call as a CPM. So what we did here is we kept the cost same, and all other resources same, and we allowed the compute power to differ. 
So for the same cost, we got more to two to 2.5x more cores of ARM. And fortunately, with these extra computing power, MariaDB on ARM was able to scale better. And as you could see, on all the use cases, MariaDB on ARM has once it crosses certain scalability point, it has scale. And in some cases, the difference is almost more than double. That's the magnitude of extra TPS user could get for the same cost just by moving on to ARM. Now, from developer perspective, just trying to get back to the old model where we actually allow the cost to differ, you know, ARM being cheaper, we all know about it. But then if we keep the same compute power, or we give the same compute power to both the variants and along with all other same resources, and we try to compare the performance, then you would discover that ARM is powerful enough, and it is able to be at least on par with its variant, or in some cases also beat its variant by some significant margin. Now, what it effectively means is that for the reduced cost, and which in some on an average, it shows that roughly 50% less cost, you could get the same TPS. So that extra 50% is more like a saving that you could yield out of it. Now, of course, we have just shown couple of scenarios. We have tried it with other scenarios where there is more contentions, IO bound and all, and the picture is same. So what is the message that is coming out, and why user should consider ARM is more TPS for same cost. So for the same cost, you can get more TPS, or of course, it turns out to be a saving in one way. Or if you want to look at the other way, more saving for same TPS. For same TPS, you could get roughly on an average what we are looking at as a 50% cost saving that you could get. So I think that is good enough reason for any user to consider why we should consider for migrating to ARM. Now, let's understand now that we have convinced ourselves that, okay, well, you know, migration to ARM could be one of the good thing, we will be saving on significantly saving on the cost front. So how do we migrate? Now, we all know that database rarely exists in standalone fashion. So they would exist in some kind of setup, where it would be either a multi master setup or a master slave setup. And of course, there would be other components like backup load balancers, high ability monitoring and all. So it's important that we consider some of these aspects when we talk about, especially the setup when we talk about migration. So first, let's talk about multi master setup. So multi master solution, which is inherently offered by MariaDB server is through Galera replication, wherein you could actually write and read from any of the nodes, you know, and the protocol that is used for this internal replication is Galera replication protocol. Now, this protocol is completely independent of the architecture or basically what processor is being used. As long as the processor is speaking little Indian, the stream will be able to interpret. And you, it doesn't matter whether you are running the instance on X processor or Y processor. So from that perspective, adding an extra instance of ARM should be a straightforward task. And it is, you can start booting up ARM instances and start joining them or pointing them to the existing cluster. And you know, they will simply join. Fortunately, the solution itself has all the complexity taken care of, especially with joining of the new nodes and making sure that the dumps is and everything is restored. 
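As a rough, illustrative sketch — the host names and expected values here are assumptions, not taken from the talk — once a new ARM node has been started with its wsrep_cluster_address pointing at the existing nodes, you can confirm from SQL that it has joined and what architecture each server was built for:

    -- Run on the new ARM node after it has been pointed at the existing
    -- cluster (e.g. wsrep_cluster_address = gcomm://node1,node2,node3).
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- total nodes in the cluster
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';       -- should report Primary
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- should report Synced once state transfer is done

    -- Useful while mixing architectures during the migration: shows whether
    -- this particular server binary was built for aarch64 or x86_64.
    SHOW GLOBAL VARIABLES LIKE 'version_compile_machine';

These are standard Galera status variables, so the same checks work on the old x86 nodes and the new ARM ones.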
So if you have a multi master solution, you just need to boot up couple of ARM instances and they will be part of your cluster. Now, eventually you can of course, decommission the existing node. So migration for multi master is pretty straightforward. Simply start booting and start working. Now, there are few challenges that everyone should be aware. And I would say not challenges, but some things to keep in mind. So of course, if you have, you know, odd, I presume that the standard practices to have odd number of nodes, you boots another odd number of nodes on ARM that makes it even if you're going to retain the setup for some longer time for evaluation purpose, then that will create an even number of nodes, which is not recommended. So you may need an arbitrator overall throughput of the node depends upon the slowest node. In our case, this could be the old node, which are slower node, because this time when you booted the new nodes, you based on ARM, you may be operating with double the computing power. And then you may be wondering why is my performance not getting increased. And the reason one of the reason could be that, you know, your old nodes are kind of acting as a critical path. Additional nodes. So when we have three nodes, the commit latency was the commit has to get certified by the other two nodes. But when you have six nodes, the commit has to be certified by five nodes. So the commit latency is also will be more. So if you're just going to do a comparison, it would not be an apple to apple if you're comparing three node cluster against six nodes. And depending upon which cloud provider using what kind of you know, not even the cloud providers who are offering ARM instances, it may not be available in the same AZ, then you may have to boot it in different AZ. So all those aspects needs to get considered. It could, you know, if you in case you are going for a geo distribution or something, then you there, it's possible, just that you will have to tune some parameters in your Galera MariaDB setup so that you know, ensuring that you don't lose because of the geo distribution. So a complete alternative solution to that could be setting up a separate ARM based Galera cluster and linking it with the old cluster using a sync replication. This way there is the drawbacks which we discussed could also be taken care of. And this way you could independently evaluate the cluster completely based on ARM. Now, of course, it has its own challenges too, but this is an alternative setup. And when whenever you have done with your evaluation and you decide to do a complete switch over, then you will have to carry out your master switch over protocol, the standard way. And then you can continue with your ARM based cluster. Now let's talk about master slave setup. Now, this is a typical setup even in the previous setup load balancer is there, but just that show it out explicitly. You would have master, you will have replica, you would have some kind of a, you know, MHA kind of solution for eventually if master goes off, then replicas to take over and all these things. Now, booting an extra replica based on ARM is pretty simple. You just, you know, have to use to take a backup and restore it. The same thing which you have been doing for existing replicas. And as I said, Maria backup is already available on ARM too. So restoring and copy back and all those things will be possible. 
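As a hedged sketch of that step — the host, credentials and GTID position below are placeholders, and in practice the position comes from the metadata file that mariabackup writes next to the backup — pointing the restored ARM instance at the existing primary looks roughly like this:

    -- On the freshly restored ARM instance:
    SET GLOBAL gtid_slave_pos = '0-1-123456';   -- placeholder; taken from the backup metadata

    CHANGE MASTER TO
      MASTER_HOST = '10.0.0.10',                -- existing primary (placeholder)
      MASTER_USER = 'repl',                     -- replication user (placeholder)
      MASTER_PASSWORD = '********',
      MASTER_USE_GTID = slave_pos;

    START SLAVE;

    -- Check that the new replica connects and catches up before sending it
    -- any read traffic or considering it for a switchover.
    SHOW SLAVE STATUS\G

Because the binlog stream is architecture-independent, this is exactly the same procedure you would use to add an x86 replica.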
So you just take a backup from master or whether you take it from master or replica, whatever is the thing which you have been using and restore it and just start boot an instance on ARM. And you know that instance will then start acting as one of the extra replica for the existing cluster. That way you can boot more replicas. And what we suggest after that is probably you should add them to your load balancer and start directing your queries. This is like a stop gap step where you could now start evaluating whether ARM is no kind of, you know, to get a confidence that is ARM able to perform to the stage which you are looking for. I'm sure it would be. And this is not, this being an async replication, it is not getting affected that, you know, if your ARM servers are fast enough, they will help you produce the better throughput. And of course, once you are convinced about it, you can actually go ahead, boot your standby master on ARM and do a switch over. Again, since most of these things are not, you know, the protocols which we have been talking about, whether it's been log or Galera are not process dependent. So there's no issues at all. And, you know, so basically adding new replicas or switching over or moving to a new setup is as good as just whatever strategy or whatever protocol you have been using for old days is the same thing. Nothing changes there. So isn't that simple indeed it is right. So most of the people first, you know, whenever we spoke to them first time, they said, Oh, migrating to ARM, what is it, what would you take, what are the problems and all those things. And I think it is pretty simple. ARM is just a different processor come from the same little Indian family. So it shouldn't be an issue. No, but of course, there are open issues and challenges. These are not like hurdles, but something which you should keep in mind when you evaluate because I'm sure most of the user will first evaluate and then they will accept some things. So we want to make sure that for a fair comparison, you are aware of these important things. Now, if you're booting an ARM instance on a different cloud provider, then you should keep a watch on it because your different, you know, sometimes your local cloud provider may not provide ARM instances. And if you are, if you want to add a different cloud provider, then, you know, you may actually think, Oh, well, should I do it? And then there could be an approval process and all. So that's that's the first thing you should remember, whether your provider provided or not. Then the cross cloud moment is if it's going to happen, then the data has to be secured. Most of the time, the local data on data center, even though it is suggested to use secured communication, but sometimes user do use a normal communication. So but even you are going to have a cross cloud moment, make sure they are data secured, booting ARM instances in different AZ. So that also needs to be kept in mind that sometime, you know, the cloud provider may provide it. But as we said, not all cloud providers is providing ARM instances in all AZ, maybe that will change as as and when the more and more machines get added. 
But yes, so if in case you are going to spawn across AZ, then make sure you understand what effect it has on the overall system, a replication latency could be higher again, because of the way the things would get placed, there could be increased latency, because if your application is located in one AZ, or some some part of the place which was closer to the old setup, and now your new setup is at a different place, then the latency could increase, then reconfiguration of backup if you are going to continue using it and do make sure mobile most of the common tools and scripts, sorry, common tools are already available. If you have anything which is custom tools or scripts, then check them out whether they are actually present on ARM. But on contrary, what do you achieve? As we already said, cost saving of average 50%. A lot of user wanted to go and set up something in geo distributed fashion, which was not possible before. But now that you have this extra thing, you could also explore that there are increased replicas with more redundancy. There are users who have like hundreds and thousands of read replicas. Now just either they can reduce the cost or imagine that they can just double their capacity instead of 100 replicas, they will now have 200 replicas. This includes flexible topologies, right? So you could actually have a dedicated replicas just for backup purpose, which probably was not possible because of the restrictive things. You are adopting to a new improving technology where more number of cores are there and it is just supposed to improve further, you know, as and when more things get added and all that without affecting of course, then user and stability. And the good part is, you know, most of the data generating devices are already on ARM. So having the data processing device like databases on ARM would also make it complete. So you could actually have host everything under a single thing. More and more applications and softwares are now taking advantage of this extra compute power and cost advantage and moving things to ARM, including big data applications. So it could be really helpful. So with that, we will now open the session for Q&A. I hope you have understood at least got the basic idea that moving or migrating to ARM is pretty straightforward and what advantage it has. And again, I would like to thank our organizer and sponsor for making this possible. You can remain connected. We have a blog and there is also a dedicated MariaDB Zulip channel, MariaDB on ARM and you can of course follow the tweet. Thank you.
MariaDB has been releasing packages for ARM for quite some time now. ARM is known to have a lower cost of ownership there-by delivering more TPS for the same cost, effectively generating cost savings. Any changes to the working production environment would surely make DBA/dev-ops anxious and we are talking about migrating to a different architecture altogether. How complex is that? Is it feasible? Is it the right time to look into it? What about the ecosystem/other aspects? We will try to answer all such questions through this talk. We would discuss the why and how aspect, highlighting the step-step approach how user could migration existing MariaDB cluster to ARM, tools compatibility, do and don't, revert back to the old system (if needed rarely), etc....
10.5446/52713 (DOI)
[The automatic transcription of this talk is garbled and largely unrecoverable. The recognizable fragments cover the atomic DDL work described in the abstract below: how RENAME and DROP operations on database objects are logged so that they can be completed or rolled back consistently after a crash; how the DDL recovery code is exercised by injecting real or simulated crashes at every point in the logic where something is written, with more than six crash scenarios per operation; and how inconsistencies are detected — for example, if somebody has copied a table file over an old one, the difference between the table in the data dictionary and the real one is detected, and the .frm file is used to get the latest definition. The talk closes with an invitation to ask questions. Thanks.]
Crash safety is one of the requirements of modern databases. Although DML is crash safe (depending on storage engine), DDL is still problematic. MariaDB 10.6 will implement atomic DDL. Currently RENAME and DROP are fully supported for all database objects and the goal is to implement all remaining operations. This talk will go into the technical details of the atomic DDL implementation, explaining how it provides crash safety.
10.5446/52717 (DOI)
So, welcome to the MariaDB Dev Room here at FOSDEM 2021. It's the very first dedicated dev room; in previous years, we've shared a dev room with the MySQL, MariaDB and Friends Dev Room, and that dev room has always been far over-subscribed — we've had far more submissions than we could possibly cater for. If you were in Brussels in previous years, at the university, you may remember the "room full" signs, the jostling for seating, being too afraid to go to the bathroom in case you lose your seat and can't see the next speaker. So one benefit of FOSDEM 2021 going virtual is that you can watch this from the comfort of your own home, and it's available to everybody without needing to squeeze into a damp university hall. One year you might remember as well the flood, where we had to evacuate the room and move to the other side of the university. Happily, none of that is a problem this year. We still have the problem, though, of having far more submissions than we could cater for: we got 33 submissions and we only had space to show you 18. So I'd like to thank the committee, who are listed here — Ogaeia, Colin, Daniel, Manuel, Svetta and Hachansu — who had the thankless task of going through the 33 abstracts, looking at the details and deciding on the 18 talks that would hopefully be of most interest to everybody here in the FOSDEM community. Thank you very much for your hard work in selecting such a fantastic range of talks. This is the schedule, and one benefit of pre-recording everything — this video itself is pre-recorded — is that it's very unlikely that the schedule will change, so what you see here is hopefully the final schedule. You can double-check on the FOSDEM site itself, but I'm fairly certain this is what's coming up next. So, how it will work. The videos, as I say, are pre-recorded. While the video plays you'll be able to use Matrix to chat, potentially to the speaker — not all the speakers could join us, but you're able to ask questions of other people listening and perhaps get an answer from the speaker. After the video is finished there will be a Q&A session. Some of the Q&A sessions are unfortunately very short because the video runs long, but this will be a live video Q&A with a host, and the host will pass on some of the more interesting questions for the speaker to answer live. At the scheduled time for the next video to start, the video will start automatically, so there will be no running out of time and no chance of being late. If you are still interested in carrying on the conversation with the speaker, there will be a separate hallway stream where you can, as it were, go outside of the venue and continue speaking to the previous speaker while the next video begins. At the time of recording this video it's not completely clear to me exactly how this is going to work — FOSDEM was still doing some last-minute tweaks. I know it's using the Matrix chat protocol; Element is the chat app. Element has been delisted by Google Play as I record this, so hopefully all of that is being sorted out and you are able to connect quite easily. I've also listed a URL of the conference companion app that might be useful. If you are stuck and you do need help — you can't get to the chat or you can't work out how the hallway works — please contact us in the chat.
A brief introduction and overview of what you can expect from the MariaDB devroom at FOSDEM
10.5446/52719 (DOI)
Hello, this is a talk titled JSON support in MariaDB: news, non-news and the bigger picture. My name is Sergei Petrunia, I am a MariaDB developer, so for me news means new code and non-news means old code. We are going to talk about the new code in JSON support in MariaDB as well as some of the code that is already there. The first topic is JSON path. This is not news: there are multiple JSON functions in MariaDB already and a lot of them accept JSON path expressions as an argument. These are used to locate elements in a JSON document. The path language in MariaDB wasn't documented until recently; perhaps some people thought that it is just .foo.bar.baz or whatever — that it is obvious — but the reality is not that easy. If one searches for the definition of JSON path on the web, one can find many different definitions, but the one that should be used with the SQL language was defined in SQL 2016, in the section on the SQL/JSON path language. So let's look at that in detail. A JSON path starts with the mode. The mode can be either lax, which is the default that is used if it is not specified, or strict. Then goes the dollar sign. The dollar sign denotes the context element; by default it is the root of the JSON document, but some functions may use other context elements. It is then followed by multiple steps, which select elements in the JSON document. The first kind of step is the object member selection step. You can select a member by its name: for example, here there is a JSON object and you can select $.name and you get the value MariaDB. You can select all members by specifying .*, and this produces a sequence of the values of all members of the object, in this case MariaDB followed by 10.5. Strict mode requires that the context element is an object and, if a member name is specified, it must be present in the object, otherwise it is an error. Lax mode ignores missing elements, unwraps one-element arrays and tries to avoid errors. The second kind of step is the array element selection step. If the context is an array, then you can select one or several elements from it. You can just use a number — the numbers are zero-based, so if you take element number zero you get the first element in the array — you can specify a range, you can use the word last to say that you want the last element, you can use a comma-separated list of any of the above, or you can select all elements by using star, in which case you get the sequence of all elements in the array. Strict mode requires that indexes are within bounds, otherwise it is an error; in lax mode there is no such restriction. After each step there can optionally be a filter. I mentioned that we get a sequence of elements, and a filter allows one to filter out elements of the sequence. The syntax is a question mark and then a predicate. It can be a formula, a logical AND/OR formula with comparisons of constants and elements, and it can use arithmetic. Certain functions are supported, and you can refer to parameters passed from outside. Here is a basic example. We have a JSON document with two objects; we select them both by specifying that we want all elements in the array, and then we want only those that have the member color equal to black. Then we add another step, .item, which selects the member named item, so we get just one value, laptop, here. I won't go into further detail on filters because neither MariaDB nor MySQL supports them. Let's switch to discussing what MariaDB and MySQL do support. They support only lax mode. 
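To make the lax-mode behaviour concrete, here is a small illustrative session — the document and values are made up; JSON_EXTRACT and JSON_VALUE are the MariaDB functions that accept these path expressions:

    SET @doc = '{ "name": "MariaDB", "version": "10.5",
                  "tags": ["database", "relational", "open source"] }';

    SELECT JSON_EXTRACT(@doc, '$.name');       -- "MariaDB" (still JSON-quoted)
    SELECT JSON_VALUE(@doc, '$.name');         -- MariaDB (unquoted scalar)
    SELECT JSON_EXTRACT(@doc, '$.tags[0]');    -- "database"
    SELECT JSON_EXTRACT(@doc, '$.*');          -- all member values, wrapped in an array
    SELECT JSON_EXTRACT(@doc, '$.missing');    -- NULL in lax mode rather than an error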
I have actually discovered that even there they are not fully compliant. I discussed this question with the MariaDB developer who developed the feature, and he said he was just following MySQL. MySQL marked the bug as verified, so at least perhaps they intended to be compliant; we don't know for certain, they don't state it in their documentation. Object member selection is supported — it's the basics. For array selection, a single index is supported by both; MySQL also supports last and M to N. As I mentioned, filters are not supported. This let the people who coded this cut things down a lot, because they didn't need to support expressions, arithmetic or functions. On the other hand, it loses quite a bit of the expressive power of JSON paths. There is, however, an extension supported: recursive search. For example, suppose we have the task of finding a certain member anywhere in a JSON document. With the SQL/JSON path language this is not possible — you need to specify the full path to what you want. So both MariaDB and MySQL support an extension, the wildcard search step. If you use a double star, it will select all direct and indirect children of the current element. Then you can add another step to select a member by name. This way you can find, for example, a member named price anywhere in the JSON document. PostgreSQL also supports this extension, but their syntax is .** — so the syntax is different, but the semantics is the same. How is that useful? Well, one use that I make of it almost daily is the optimizer trace. The optimizer trace is a JSON document; it's basically a log of query optimizer actions. This is a huge JSON document, and it has a recursive structure because SQL allows recursive constructs like subqueries, so we get the same kinds of actions done at different levels. Some of the actions done by the optimizer are done for each table; one of them is to estimate the number of rows in the table, rows_estimation. So you can write a query like this: SELECT JSON_DETAILED — that's just there for formatting — of JSON_EXTRACT, and then $**.rows_estimation. It finds all the row estimations that were done for the tables used by the query, in the top-level select and in the subqueries, everywhere. If filters were supported, we would be able to find the row estimation for a specific table. Unfortunately they are not, so we need to get them all and just look ourselves. So for deeply nested JSON documents, the wildcard search step is very useful. Summary: the language for pointing at nodes in a JSON document is called the SQL/JSON path language. It was introduced in SQL 2016. Both MariaDB and MySQL implement a subset of it, and it is now documented. Lax mode only. Filters are not supported, which is unfortunate. Array indexing only supports a single integer index. And both MySQL and MariaDB have the same recursive search extension, which is good. If you look at other databases, PostgreSQL seems to have the most compliant and feature-rich implementation — they support filtering, strict and lax mode and so forth. I'm not sure if they are fully compliant, but they seem to be. Other databases support different and often very restrictive subsets. So here MySQL and MariaDB are quite competitive. The next topic I wanted to discuss is news: the JSON_TABLE feature. News because it is under development in MariaDB. JSON_TABLE is a table function: it takes JSON input and converts it into a table, so unlike other functions, you use it in the FROM clause. It was introduced in SQL 2016, and it's supported in Oracle Database and in MySQL. 
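Before moving on to JSON_TABLE, here is a sketch of that optimizer-trace query. It assumes the optimizer trace has been enabled for the session, and the exact members present in the trace (such as rows_estimation) depend on the server version; the tables in the analysed statement are just examples:

    SET optimizer_trace = 'enabled=on';

    -- Run the statement you want to analyse, for example:
    SELECT * FROM t1 JOIN t2 ON t1.a = t2.b WHERE t1.c < 10;

    -- Then pull every rows_estimation node, wherever it occurs in the
    -- deeply nested trace, and pretty-print it:
    SELECT JSON_DETAILED(JSON_EXTRACT(trace, '$**.rows_estimation'))
    FROM information_schema.OPTIMIZER_TRACE;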
It is under development in MariaDB; we are trying to get it into MariaDB 10.6. Let's start with an example. We have a JSON document which is an array of two objects with the same members, and suppose we want to get that data in tabular form. We can call the JSON_TABLE function: we select star from JSON_TABLE, then we specify which columns we want, and we get the data in tabular form. Let's look at the syntax in detail. SELECT * FROM JSON_TABLE — as I've mentioned, JSON_TABLE is a table function, so you use it where you would use a table. The first argument, the JSON doc, is the source document; in this case it's in a session variable, but it can come from other places as well. The second argument is a JSON path to the nodes to examine. This produces a sequence, and each element of the sequence produces its own row. Then come the column definitions. This is a bridge from schemaless and typeless JSON into the highly typed world of SQL — a schema-on-read, where you specify what schema we expect when we are getting the data from JSON, and here we specify the paths to get the values from. The context for these paths is the node being examined. If that was not complex enough, it gets more complex: nested paths are supported. Suppose we have a JSON document describing some items, and they in turn have arrays of colors in them. How do we normalize this structure? JSON_TABLE supports the nested path syntax. We first specify that we will break the JSON document into a sequence of objects, and then inside each object we specify that each color will also produce its own row, and the output will be normalized: the first object, which describes a laptop, will produce three rows, one for each color, and for the second one, the t-shirt, we will get two rows, one for each color as well. This allows one to go from nested structures to a normalized, relational form. Multiple nested paths are supported: you can nest nested paths, and one can have sibling nested paths. The standard allows one to specify how they want to unnest; it's called the PLAN clause. The default is outer-join-like, so if we get an object which says name t-shirt, with two colors and three sizes, we get all of the colors with size NULL and all of the sizes with color being NULL. The PLAN clause allows one to specify a cross-join-like or some other variant of how to normalize it into relational form. MySQL and MariaDB will only support outer-join-like unnesting, which is the default, which basically means we don't support the PLAN clause. Another feature of JSON_TABLE is error handling. JSON is not strictly typed, so one needs to account for possible errors when converting to relational form. The JSON_TABLE syntax allows one to specify, after the path, an action to take ON EMPTY, which is used when the requested JSON element is missing, or ON ERROR, when there was a type conversion error. You can produce an SQL NULL, you can use some default value, or you can just signal an error. Both MySQL and MariaDB support this. An important point is that JSON_TABLE can be used with joins. Let me explain what I mean by that. Suppose you've got the table orders, in which there are two rows describing two orders, and each order has a JSON document which describes the items in that order. How do I get that in relational form? You can write a query like the one shown below. 
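Here is a sketch of the kind of query being described, using the JSON_TABLE syntax as implemented in MySQL 8.0 and planned for MariaDB 10.6; the table, column and member names are purely illustrative:

    CREATE TABLE orders (
      order_id INT PRIMARY KEY,
      items    JSON               -- in MariaDB, JSON is an alias for LONGTEXT
    );

    INSERT INTO orders VALUES
      (1, '[{"item": "laptop",  "price": 1200, "colors": ["black", "silver"]},
            {"item": "mouse",   "price": 25,   "colors": ["black"]}]'),
      (2, '[{"item": "t-shirt", "price": 15,   "colors": ["red", "blue"]}]');

    SELECT o.order_id, jt.item, jt.price, jt.color
    FROM orders AS o,
         JSON_TABLE(o.items, '$[*]'
           COLUMNS (
             item  VARCHAR(50)   PATH '$.item',
             price DECIMAL(10,2) PATH '$.price' NULL ON EMPTY,
             NESTED PATH '$.colors[*]' COLUMNS (
               color VARCHAR(20) PATH '$'
             )
           )
         ) AS jt;

With the sample rows above, the laptop produces two rows (one per color), the mouse one, and the t-shirt two — the nested JSON has been flattened into a normalized result.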
You can take the table orders, then join it with JSON_TABLE, and its argument will refer to the table orders, which is not typically possible, but JSON_TABLE allows it. This way you get normalized output. The JSON_TABLE can refer to other tables, and this produces LATERAL-like semantics: the contents of the table produced by JSON_TABLE will depend on the parameter values. This is great for normalization — if you have a table which has some data in JSON, you can normalize that data into SQL by doing a join with the JSON_TABLE function. Summary for JSON_TABLE: it is a table function to convert JSON data to relational form, introduced in SQL 2016. The standard specifies a lot of features; MySQL implements a subset of them, and in particular the PLAN clause is not supported. In MariaDB it is under development — here is the tracking issue number — and we are going to implement a subset that is very close to what MySQL did. As for other databases, let me quote a slide from Markus Winand, which shows that basically only Oracle and MySQL support it; now we are going to add MariaDB. I know that PostgreSQL has it under development — they had a patch posted on their mailing lists — but I'm not sure how much work they have left to do. The overall takeaway: this is SQL 2016. MySQL has implemented a reasonably good subset of it with some meaningful extensions, and MariaDB is catching up to it and includes the extensions. One more slide about low-hanging fruit. As you saw, there are some missing features. I think JSON is a very good portion of the code to start contributing to, because it is fairly isolated from the rest of the code and it doesn't have a lot of legacy or compatibility concerns to care about. So low-hanging fruit, in my opinion, would be to implement the last or M-to-N array indexing steps. A bigger and perhaps slightly higher-hanging fruit is to add filter support, which would be very useful. A much lower-hanging fruit is to improve the JSON_DETAILED function. The JSON_DETAILED function is basically a JSON pretty-printer: it prints JSON in a readable form. This is very useful for development, and it is also useful when debugging the optimizer, because you can get the data from the optimizer trace, do some processing, and then you need to print it for viewing. That function works, but it's fairly dumb, so we would be glad to review and accept a contribution to make it produce more condensed output. All of these are great ways to start contributing to MariaDB, so please contact us if you are interested. This concludes the talk. Now we can have a Q&A session. Thanks.
This talk aims to cover everything about the current state of JSON support. First, I'll cover the newest addition, JSON_TABLE in MariaDB 10.6. Then, I'll discuss the power of JSONPath in MariaDB and how it compares to the SQL Standard and other databases. This is technically non-news but it hasn't been discussed before. Finally, we'll take a look at other present and missing features and see what are the biggest and lowest-hanging fruits in JSON support in MariaDB.
10.5446/52724 (DOI)
Hello everyone, this is Pika Pli from Watcher Technology. Today I will introduce how we helped a customer migrate from Oracle to MariaDB without application changes. I will present the migration from the following aspects. First I will introduce the background of the migration and why the customer wanted to migrate to MariaDB. Then I will introduce the migration process itself: how we migrated step by step, what work we did for this migration, what tools were developed, and so on. Finally I will review the entire process to make it easy for everyone to understand the whole migration. Firstly, let me introduce the background of this use case. The customer has IT systems that were purchased from ISVs (independent software vendors) or developed by outsourced personnel. The old applications use a traditional architecture and run on minicomputers like IBM Power with an Oracle database, but most of their new systems already run on open source databases on x86. We all know that developing a system may only take one or two years, but maintaining a system may last more than 10 years. The resignation of outsourced personnel, and even the closure of an ISV, made it difficult to maintain the old systems. In order to unify the IT architecture, our customer tried to solve this problem from two directions. First, they require ISVs to develop new application systems based on x86 Linux and open source databases. Secondly, they try to migrate the old systems to the new IT architecture. Migrating off IBM Power and EMC storage is relatively simple compared to database migration. Part of the old systems were migrated through application reconstruction. However, there is still a number of applications where, either because a small ISV has closed down or because the technology choices of a large ISV cannot be controlled, the migration to an open source database has been delayed. So the problem changes: the old application can be migrated, but the ISV cannot modify the code to migrate the database from Oracle to an open source database. The customer therefore began to explore whether there is a way to migrate to an open source database without modifying any code, and which of the open source databases — MySQL, MariaDB or PostgreSQL — is most suitable for hosting such a business. Firstly the customer ruled PostgreSQL out of the list. Although PostgreSQL has a lot of syntax similar to Oracle, its ecosystem here is not good enough: there are few PostgreSQL DBAs on the market, and no professional vendor support can be found if there are any database issues. Secondly the customer excluded MySQL. MySQL is not compatible with Oracle syntax, and since the ISV cannot modify the application code, it might cost years of changes to migrate to MySQL. So that leaves MariaDB as the best choice. MariaDB shares the MySQL ecosystem, which has lots of third-party tools, lots of technical documents, many engineers familiar with it, and plenty of commercial support. With sql_mode=ORACLE, MariaDB can be compatible with Oracle syntax, including data types such as VARCHAR2, NUMBER, etc. You can create a sequence to get continuously increasing values, and it is even compatible with the stored procedure language, PL/SQL. In addition, a migration from Oracle to MariaDB had already been done at the Bank of Canada. Therefore the customer finally chose MariaDB as the open source database to replace Oracle.
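A tiny sketch of the sql_mode=ORACLE features just mentioned; the table and sequence names are illustrative, not the customer's actual schema:

  -- Enable Oracle compatibility for the session
  SET SESSION sql_mode = ORACLE;

  -- Oracle-style data types map onto MariaDB types
  CREATE TABLE accounts (
    id   NUMBER(10)    NOT NULL,
    name VARCHAR2(100)
  );

  -- Sequences give continuously increasing values
  CREATE SEQUENCE accounts_seq START WITH 1 INCREMENT BY 1;
  INSERT INTO accounts VALUES (accounts_seq.NEXTVAL, 'first row');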
As a partner of MariaDB in China, Watcher Technology helped the customer complete the first migration from Oracle to MariaDB in September, and we will help the customer migrate more applications away from Oracle this year. Let's summarize the whole background. The customer has some legacy applications using Oracle. The business logic is not complicated and the workload is quite stable now, but they still need to pay expensive Oracle license fees for these applications. Secondly, restructuring the application or modifying its SQL is impossible for the customer, so they need the database to be able to run the Oracle SQL directly. So let me introduce in detail how we helped the customer migrate from Oracle step by step while ensuring no application change. The first problem we faced when we actually took over this project: how does the application connect to the MariaDB database without modified code? We investigated and found that the customer's application is written in Java and uses JDBC to connect to Oracle, so it is doable: we only need to change the connection string from the Oracle string to the MariaDB string. After we start the application, it will load the MariaDB driver to connect to the MariaDB database. If the application uses ODBC, the corresponding modification is similar. But if the application is written in C or C++ and uses OCI to connect to Oracle, there is no way to connect to MariaDB without changing application code. The second question: can sql_mode=ORACLE be 100% compatible with Oracle? After a simple test it can be found that MariaDB is currently not 100% compatible with Oracle — for example ROWNUM, the TO_CHAR function, etc. So the question changed to: which Oracle features are used in the application, and for which of them do we need to modify MariaDB to be compatible with Oracle? That is to say, we need to know all the SQL the application executes. Correspondingly, we can enable SQL trace and collect the SQL area info to get the DML queries on the Oracle database, and obtain the table structures, stored procedures, etc. for the metadata. If the SQL of the application is placed in MyBatis or a similar framework, you can get all the SQL directly from the XML files. If the application connects to Oracle through a proxy, you can capture the SQL statements at the proxy layer; Watcher Technology will be able to support the Oracle OCI protocol in our proxy product at the end of this year. After collecting all the metadata and the SQL from Oracle, we need to analyze which SQL is compatible with MariaDB's sql_mode=ORACLE. We can use SQLines, an open source migration tool, for the analysis. There are similar Oracle-compatibility analysis tools from various clouds, such as Alibaba's and Huawei's. At present, Watcher Technology has also developed its own set of tools to analyze the compatibility of MariaDB with Oracle. MariaDB has already done a lot of Oracle-compatibility work, like the VARCHAR2 and NUMBER data types, the sequence feature, and a lot of syntax compatibility such as Oracle-style CREATE PROCEDURE and CREATE FUNCTION. But beyond what MariaDB has already done, for this case we still had some compatibility work to do. For DDL, Oracle has the ENABLE keyword, which is just the default behavior there, but MariaDB does not recognize it; so we can simply ignore this keyword in CREATE TABLE and CREATE INDEX. For DML, the SYSDATE function is already implemented in MariaDB, but Oracle allows calling it without brackets; we can modify the grammar to handle that. The above three tasks are very easy — we just modify the syntax tree to resolve them. Oracle's TO_CHAR function can convert dates and numbers into character strings.
The function is complicated and not supported by MariaDB, but there are similar functions that can help to implement it. The most difficult work is ROWNUM: MariaDB cannot support it at all and we could not find a similar function. Sometimes it looks like LIMIT, but it has more complex usage. Based on the analysis we then knew which parts of the code we had to modify. So let's try to modify MariaDB. For tasks one, two and three, we just need to modify the sql_yacc.yy grammar file. It's quite easy — just add a keyword so that the parser can recognize it. After the code change you can see that MariaDB can run the Oracle SQL now. Let's take a look at task four. Here is the TO_CHAR function syntax; it can convert numbers and datetimes to strings. Take the conversion of a datetime into a string as an example: TO_CHAR(date, fmt), where the fmt parameter supports a variety of format characters — we list some of the supported ones here. To be frank, the Oracle TO_CHAR function is very complicated. For example, the Oracle format supports J, which is the Julian day — the number of days since January 1st, 4712 BC. It is not supported in MariaDB, so we only implemented a limited TO_CHAR function for our customer. To make MariaDB support the TO_CHAR function, we can use the DATE_FORMAT function to implement it. SQLines provides a table mapping the format specifiers of Oracle's TO_CHAR function to the DATE_FORMAT function. To implement it, firstly we need to register TO_CHAR as a native function in the sql/item_create.cc file and have it create an Item_func_to_char item to convert the datetime to a string. We must implement the class Item_func_to_char, inheriting from Item_str_func, to do the conversion. Its val_str method is the key function; it uses a make_date_time helper for Oracle formats, similar to the make_date_time function in sql/item_timefunc.cc that is used by DATE_FORMAT in MariaDB. Then we can see the result of the query as shown in the figure: MariaDB can support Oracle's TO_CHAR function now. The most difficult is task number five, the ROWNUM feature. How can we implement ROWNUM? Our idea is to add ROWNUM as a virtual column, based on MariaDB's virtual column support. Then we add a counter named current_rownum whose initial value is 1: when a row is read, the ROWNUM field is set to current_rownum, and when a row is sent to the client, current_rownum is incremented by 1. So if the query has no WHERE clause, ROWNUM is just the read sequence number. If the query has a WHERE clause that does not use ROWNUM, the ROWNUM field returns the sequence number of the rows after filtering and ordering. If the query has a WHERE clause with ROWNUM and the condition is ROWNUM greater than 1, it will not return any data — the same as in Oracle. If a result set from a subquery already contains the ROWNUM field, we do not add the ROWNUM virtual column again, which is also the same as in Oracle. After we had done these changes, we had solved all of the ROWNUM incompatibilities. The above Oracle-compatibility code and test cases have also been submitted to the MariaDB open source community. Monty, the father of MariaDB and MySQL, gave special guidance, and part of the code has been merged into MariaDB. Among them, our version of ROWNUM was a limited version, only for this application of our customer; Monty has written a full version of it and plans to release it in version 10.6.
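As an illustration of that mapping approach — an Oracle-style call on top and a rough DATE_FORMAT equivalent below it; the ROWNUM line assumes the native 10.6 implementation mentioned above and is shown only as a comment:

  -- Oracle-style call issued by the application:
  --   SELECT TO_CHAR(order_date, 'YYYY-MM-DD HH24:MI:SS') FROM orders;
  -- Rough DATE_FORMAT equivalent used to implement it:
  SELECT DATE_FORMAT(NOW(), '%Y-%m-%d %H:%i:%s');

  -- ROWNUM-style limiting, as supported natively from MariaDB 10.6 on:
  --   SELECT * FROM orders WHERE ROWNUM <= 10;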
The next stage is the verification stage. We first run all the collected DML internally on the modified MariaDB and compare the output with the results of running it on Oracle. If the results are inconsistent, it means there is a problem with the MariaDB we modified, and we need to find the reason and modify the code to ensure that it is completely consistent with the results from Oracle. After the internal verification is completed, our customer deploys the application in a test environment, imports real business data, and verifies the results of the application running on MariaDB. The data migration itself is not simple. One approach is to export the Oracle data to flat files after shutting the application down and then import them into MariaDB, but if there are lots of tables and rows, the application downtime will be very long. So you need to ensure that the MariaDB state always keeps up with Oracle's latest data: you take a full backup and then continuous incremental backups, and then when we shut down the application, MariaDB can quickly catch up with Oracle. Watcher Technology currently provides a product for full and incremental replication from Oracle to MySQL. After the migration, the same product can also synchronize the data back to Oracle in reverse, so in case MariaDB has any issue, you can migrate back to Oracle. Okay, let's summarize the whole migration process. We sorted the migration steps out as follows. The first stage: collect application information, including how the application connects to the database — if a customer uses OCI to connect to Oracle, unfortunately the code must be rewritten to use a MariaDB driver — and also collect the customer's metadata and DML SQL, including data volumes and SQL settings. The second stage: analyze and evaluate the application compatibility — evaluate the workload of modifying MariaDB, and evaluate each Oracle syntax element and database function that needs to be adapted. The third stage: modify MariaDB to ensure that all results returned to the application are consistent with the results on Oracle. This stage is generally the most time-consuming and most challenging work for the database developers. The fourth stage: verification and migration of the business. The previous stage focused on the results returned by the database layer; this stage requires verification and testing at the application layer, and the DBA should provide a professional migration plan to ensure the application migrates smoothly to MariaDB. And we are very glad to see that AWS, the leader in cloud, has recently announced Babelfish, a SQL Server-compatible feature based on PostgreSQL, which indicates that many other companies also want open source databases to be compatible with commercial databases. We also hope that more and more companies join in and communicate with each other to promote the development of the open source community. At the end of this topic, I want to say we are very lucky to be able to stand on the shoulders of the giant that is MariaDB, which helped us migrate our customer's applications from a commercial database to an open source database with very little application code modification. We have created a branch of MariaDB specially for Oracle syntax compatibility, and we will continue to keep it open source. Thanks very much to FOSDEM for organizing such a great conference. If you have any other questions, please send me an email. Thank you very much.
Introduce a use case from a Chinese who migrated one of their important applications from Oracle to MariaDB with very few modifications to the application. I will cover the entire migration process and experience, including tools to check the Oracle syntax compatibility with MariaDB and tools to compare the execution results of Oracle and MariaDB, and the proxy that receives and interprets the Oracle network protocol to MariaDB.
10.5446/52728 (DOI)
My name is Valery Kravchuk and today I am going to speak about upgrading MariaDB. In general, specifically about the role of MySQL or recently MariaDB upgrade utility in this process. I am a principal support engineer working for MariaDB Corporation for almost five years already. Before that I worked in MySQL, Sanendorocal in support, then I moved to Prokona. Again I was a support engineer and recently I am in MariaDB. In Oracle I worked on MySQL bugs, so I am still a big bug-addicted person. That reflected in the public appearance in my social networks. So I have a blog named MySQL entomologists, I have a Twitter account named MySQL bugs. I had used to write about MySQL bugs on Facebook and everywhere. But since I moved to MariaDB I switched mostly to different topics and now I just write a lot of how-to and these kind of articles. Speak about MySQL and MariaDB in public, explaining how to solve some specific, often complicated, mostly performance related problem. I was noted as a contributor of the year. What you should also know that in whatever slides I have for whatever conferences when you see anything underlined, it is a link. My slides are always available either at the conference site like today at FOSDEM or always at SlideShare. So you can easily find them, download and use as a reference. A word of disclaimer, even though I work for corporation and this is my MariaDB dev room, I am not sent here by the corporation. I worked on all these at my free time and so I am free to express my own views. It's not a corporation agenda. It's my own views, they may be wrong but I am trying my best to claim only true things I can verify or believe in. So MariaDB upgrades are recently quite popular for many reasons. Some of them is that old widely used versions of MariaDB are out of support. I speak about 5.5, 10.0 and 10.1 recently. So users are planning upgrades and going to upgrade. Customers are somewhat forced to upgrade. So if you decided to upgrade to different MariaDB version or if you decided to move to MariaDB from MySQL or Prokona server, what sources of information you may like to explore? First of all, it's our own knowledge-based public one. I am not going to speak about any internal enterprise, only documentation or enterprise software in general. So I speak about community versions of MariaDB here. So we have a free open editable by everyone knowledge base that has a huge collection of pages under the main one named Upgrade in MariaDB, separate page for MySQL Upgrade Utility. The content of these pages evolved over time. Major event happened last year in April when Monty published his blog post-upgrade in between major MariaDB versions where he tried to fix common misconception about upgrades and explain how upgrades are supposed to work. As a highlight, he claimed that MariaDB is designed to make upgrades easy to do and to large extent it's true. So I started some of the claims there in a separate blog post. That's my blog post. It was inspired by some statements that were widely discussed in our support and with our customers who tend not to do what Monty suggested and we used to suggest something different. In any case, as Monty asked to report every problem with MySQL Upgrade into our Gila, that's also your source of information that I've used here in this presentation. So it will be the first time in the history of me speaking in public when I am going to speak specifically about MariaDB bugs and feature requests for the wider audience. So this session is about upgrades. 
Best practices for upgrades are documented. Basically, you should take a backup, you should shut down your server cleanly if you use InnoDB — and you do — you have to upgrade the binaries (install packages, install separate tarballs, build from source, whatever procedure you follow), then start the new version of the MariaDB server, ideally with --skip-grant-tables, for the sole reason of running mysql_upgrade. It's documented that it's a must. You may skip --skip-grant-tables, you may skip running mysql_upgrade — and end up with a lot of different problems later, so I suggest not doing that. Also, why are we discussing this at all? If it's all documented, just follow the best practices, and if you had not, then it's your own problem. But there is one important question. For years, for decades, in the MySQL world it was stated that upgrades between major versions should not skip intermediate major releases. So if you upgrade from 10.1 to 10.5 you should do it step by step. And the point was that while in MySQL that may be the case, in MariaDB it should not be the case in general — other details aside. And here lies the role of the mysql_upgrade utility, so we have to discuss what it really does and why it may really be the case that you can jump from here to there. One important detail: when you upgrade across major releases there is a key point — if you upgrade to 10.4 or newer it's a big deal, because there are changes in the content, structure and storage engines used in the mysql database that are really important. We will also speak about the problems mysql_upgrade cannot and is not designed to resolve, feature requests for mysql_upgrade, and all that kind of stuff. So for years, and up to today, we can read in the MySQL manual that upgrades should not skip versions. The same sentence used to be in our knowledge base, and it's a common belief in all the support organizations I worked in that you should upgrade step by step. It has some additional value — it does — but on the other hand it's just a lame statement that we had not cared to check, and to make mysql_upgrade work for the other cases; we just didn't care. Monty's point was that we do care, to a large extent: to the extent that we are trying to make upgrades of this kind possible, and we are testing, at least for simple cases, that we can upgrade from 5.5 to, say, 10.5, maybe directly to 10.6 — there is some automated testing in place. It's still an open discussion whether this goes beyond really simple cases. For example, if Galera replication is used, you cannot jump from 5.5 directly to 10.5 for sure, because Galera version 2 and Galera version 4 are not compatible — you cannot just join a cluster of old versions with a Galera 4 node. But in general, mysql_upgrade is supposed, in simple cases, to solve all the problems you may have with any MariaDB release starting from version 5.5, and maybe even with older MySQL versions as well. So if you care to find out what mysql_upgrade really does, you can read our updated knowledge base article — Monty changed it to reflect the statements from his blog post. Or, if you do not want to blindly believe Monty, you can do what I did in my blog post: you can enable the general query log, run at least one upgrade scenario, and see what SQL statements are executed by mysql_upgrade. It really works via SQL, so it requires a started server accepting SQL statements, and it tries to fix the problems in the mysql.* and other tables with SQL statements in a so-called idempotent way.
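If you want to see those statements yourself, something along these lines works — writing the log to a table is just one of the options:

  SET GLOBAL log_output  = 'TABLE';
  SET GLOBAL general_log = ON;

  -- ... run mariadb-upgrade / mysql_upgrade from the shell ...

  SET GLOBAL general_log = OFF;
  SELECT event_time, argument
  FROM   mysql.general_log
  WHERE  command_type = 'Query';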
Idempotent meaning that you can start from any intermediate position but still end up with the same final result. It's more or less the same as what happens with row-based binary logging: if you start from some earlier position and skip all errors while applying the binary log, you will still end up with the same result — the same idea. So if you enable the log you will see the steps of what it really does. I will not read it all for you: everything is changed up to the current release in the mysql database, then there is an attempt to upgrade each and every user table with the hope that it works, then FLUSH PRIVILEGES, and we are done. If mysql_upgrade reports an error, then you are screwed: you cannot assume that you can safely use this instance. If it worked without errors, what you have to do is restart the server, check the error logs — there should be no errors after that — and then go use your new version. If you are even more curious — how can it cover all possible cases with just a single upgrade scenario, what is the internal logic in the code — you can go and look at the code. That's my next step; here are the links for you. There is mysql_upgrade: it's a client, it's a C program, you can read it. The key function there is called run_sql_fix_privilege_tables. It refers to a static array of SQL statements in an include file with a name ending in _sql.c that you cannot find anywhere in the source tree: it is generated by the build — you can find it in the CMakeLists.txt — based on an SQL script, mysql_system_tables_fix.sql. That script is executed sequentially, and if there are errors they are mostly skipped, because the problem is fixed later. One day I'll write about all of this in detail; for now you can just go and study this source code. So, does the current design of mysql_upgrade cover all possible cases? It should be so, but it is not. There is a set of tasks — I've checked open feature requests for mysql_upgrade created since April 2020 and picked up several of them. The key one is an umbrella task for most of the others, called "Upgrading in MariaDB"; it's a collection of missing features and ideas of what mysql_upgrade should do. Or rather it's called mariadb-upgrade: in 10.5 mysql_upgrade is just a link — remember that, but mysql_upgrade still exists. So for example, as we would like people to upgrade to MariaDB from MySQL, there are several related tasks which are not yet implemented. Virtual columns are incompatible, so we would like to fix that. Some upgrades will not work — for example, I doubt you can ever upgrade in a binary way from 8.0 — so there is a request for a tool that would check whether an upgrade is even possible. There are some internal changes in MariaDB that may require additions which the current mysql_upgrade does not make: for example, support for the real JSON data type implementation relies on a separate plugin, so if this plugin is missing you cannot handle an upgrade involving the version where JSON means something else than just a text column with a set of functions. And one of the key feature requests, created by my colleague in support, Hartmut, is a request to store the version that mysql_upgrade successfully upgraded to in a table somewhere in the mysql database, not in a file as it is now, so that it can be requested via SQL. Maybe there could be several rows in such a table so we can have a history of upgrades and all that kind of stuff. So these are the main, major feature requests.
No problems that are not yet resolved we would like them to be resolved. So what are problems that are still there but may be not present in these feature requests? First of all MySQL Upgrade does not do any magic besides those steps I have already discussed. So it cannot fix your engine problems, it cannot fix any big corruption, it cannot start MyRocks if it does not start, it cannot install a plugin if it is needed but it is missing, it cannot install or fix your Aria tables and all like that. What it also cannot do is it cannot fix MySQL check and specifically check table for update. If it does not detect some problem and one of the recent cases, a lot of I was personally affected and customers were affected is related to all date time data types that predates MySQL 5.6. In 5.5 internal representation changed and internal types changed and while versions like 10.1 can work with old date time, newer versions like 10.4 and 10.5 cannot and it can but you can hit a lot of weight problems like corrupted indexes for secondary indexes for partition tables. But MySQL check doesn't resolve it, check table for update doesn't resolve it. MySQL Upgrade relies on that and so it cannot help in its current design. Despite the problems there are also known bugs, again it is a list of recent bugs of a bigger list of like 70 that were reported since April 2020. Not all of them are really bugs in MySQL Upgrade, for example the recent one is actually a bug in the way the docker image is created by the corporation and it is probably fixed by now. Some of them are true bugs, for example there are problems with MySQL event tables. If there are events in this table you may hit problems during the upgrade, it is truly a bug. If there are no events you never notice. The problem is related to the fact that area table is used as far as I remember. There are problems related to statistics table in INODB, they are like historical, MySQL heated first, they are related to the limit on the length of the table named there. And when partitions are represented by INODB table the complete name of this partition table is even bigger. This was later fixed but it is still a problem in some older MyDB releases, not supported by now but still. So there are some set of problems and bugs related to roles for whatever reason. So depending on the way you upgrade it you may start to miss roles information like here. What our QA also found that even though MySQL upgrade was supposed to match the structure properly there are cases when by applying MySQL upgrade to older release you end up with a structure different than in the clean install of the same version. So there is a list of small differences. Most of them are probably bugs. So this is my bug report about old daytime format. There are again yet another problem with roles. There is a problem when a scale node is set off, MySQL upgrade is not designed to work in that way and these three are somewhere in between feature requests and bugs. There are problems in INODB related to the fact that redo look format changing 10.2. And when you upgrade from the older versions you may have problems with auto increment values for some tables. So to summarize. This is a kind of summary that I used in my blog post. Really what you should do is you should always run MySQL upgrade at the very first step. You can skip it for years but then you will get a lot of useless for you error messages in the error logs for years again and you may get bad query results and may get corruptions. 
All kinds of problems. It's really so. I really verified that mysql_upgrade is designed in this kind of idempotent way: no matter which version you start from, for the major key objects in the mysql database it just recreates, if not alters, everything. The code is complex enough and it's getting more complex with new versions, so bugs are to be expected, and as the real history has shown us over the last 8-10 months, dozens of bugs were reported; many of them are fixed, some still wait. It's safe enough in the general case to run mysql_upgrade repeatedly: it will refuse to run a second time, but with the --force option it will work, and it should not break things. There are bugs — or there is a bug, as far as I remember, still in unclear status — in that area, about whether CHECK TABLE ... FOR UPGRADE really works as expected by design. I highlight that upgrades skipping major versions may still work. That said, we are still fighting with bugs and problems during upgrade processes, and I suggest you do what we do in support: report bugs to our Jira. As a last word before our Q&A session, I would like to thank the MariaDB Foundation for this first ever MariaDB and Friends devroom. Thank you very much. I'm waiting for your questions. Bye.
With MariaDB in a general case (backup, proper shutdown, storage engines incompatibilities, Galera, async replication, and maybe few bugs and corner cases aside) it should be possible to easily and directly upgrade from one major version to the other, skipping any number of intermediate major versions in between. mysql_upgrade utility is designed to fix all incompatibilities in the mysql.* system tables. In frames of this talk the details of its implementation and actions are discussed, as well as some known bugs and problems that it does not solve. Upgrades to MariaDB 10.4 and 10.5 are covered, from versions at least as old as MySQL 5.5.
10.5446/52723 (DOI)
Hello everyone. Let's talk about MariaDB observability. I'll just get going, as we have a lot of material to cover and not a lot of time. Now, why is observability important? Well, if you look at modern systems, we are really dealing with a lot of complexity, and we often have many not easily repeatable problems, with performance and otherwise. That means we cannot just repeat the problem in a development system with a lot of debugging enabled; we really need production observability, so we understand what went wrong, why, and how we can prevent it. If you look at observability, which is achieved through data capture, I typically see two use cases for it. There is ongoing data capture, which you have as normal monitoring and which should come with relatively little overhead. And there is debugging, which is temporary data capture when something goes wrong and, maybe for a given session or a given instance, you enable much more verbose logging and instrumentation; that can get expensive, but it gets you more details and solves problems you cannot solve with just normal monitoring. It is also important, as background for observability, that you cannot just look at MariaDB alone, because operating system issues and hardware issues will often be the root cause of your problems. You also cannot ignore application issues, or various other background issues — in cloud and virtualized environments maybe it is the noisy neighbours giving you trouble and not your application itself. Well, that was a very brief intro into why observability is important; let's talk about MariaDB. If you think about MariaDB, these six are the most important data sources which exist in modern versions of MariaDB. The first one is SHOW GLOBAL STATUS, or SHOW STATUS. This is something which has existed forever, and it shows more than 500 status variables. The majority of them are counters, though some of them are gauges or text, and they can be session or global in scope. They can be queried through the SHOW STATUS command as well as from an information_schema table. Here is an example of querying them from the information_schema table, looking at Questions, which is the number of queries the database processed, in global and in session scope. In the global scope you can see the number is very large, because that instance has been running for a long time and processed a lot of queries; in the session scope it is just slightly more than 100 — that's how many queries the given session I am running has executed, not that many. Note, though, that even when you query session status, as in this example, you can get some data which really comes from global counters — the InnoDB row counters, for instance, are global; they are not tracked for session and global scope separately — but they will be shown whether you query in session or in global scope. You just have to know that, to avoid being confused, because interpreting them as if they were indeed session counters can lead you astray. If you would like to get output similar to vmstat, you can use this command, which pretty much prints out one of the variables — in this case Questions — at one-second increments. I found that a very helpful tool if you really need to focus on one variable and zero in on what is going on in your production.
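A couple of ways to pull the Questions counter mentioned above, in both scopes (the values will of course differ per server):

  SHOW GLOBAL STATUS  LIKE 'Questions';
  SHOW SESSION STATUS LIKE 'Questions';

  -- The same counters through INFORMATION_SCHEMA
  SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM   information_schema.GLOBAL_STATUS
  WHERE  VARIABLE_NAME = 'Questions';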
Moving on to information schema: information_schema contains lots of tables; some of them are schema-related and others are performance statistics. Here is an example of information about your schema: you can query the TABLES table and see information about your tables and views. If you look at an example of performance statistics, the INNODB_METRICS table is great information. INNODB_METRICS is very similar to SHOW STATUS, but it is focused on InnoDB, and it has some additional information — as you can see, it includes min, max and average counters, as well as information about when the counter was last reset, and so on and so forth. It is worth noting that by default many of those InnoDB counters are not enabled, and if you want to look at them all you can use innodb_monitor_enable=all, or there are also options to enable only specific ones. Another table which exists in information_schema is INNODB_MUTEXES, which is a very cool table that shows which mutexes are having a lot of OS waits, and I think that can be very helpful for some of the more advanced diagnostics when you have mutex contention. The next thing you can see from information_schema is the PROCESSLIST table, which you can think of as an extended process list. As you can see, in this case we not only have information about running queries, with time at microsecond resolution, but also information about the statement progress, where applicable, and how much memory it used, and so on, which is pretty cool. The next data source to mention is performance schema, and performance schema is some of the most advanced instrumentation available. It has more than 80 tables, so lots and lots of data. The thing to note, though, is that it is disabled by default in MariaDB because it comes with some overhead. It is also very flexible in terms of how much instrumentation — and so how much overhead — you have, and if you enable a lot of instrumentation the overhead may be quite high. If you want to enable performance schema in MariaDB, you can add this setting to your MariaDB configuration file and restart. Unfortunately, while a lot of performance schema configuration can be dynamic, enabling it requires a restart. If you look at performance schema configuration, it's kind of tricky if you will, but the good thing is that the defaults should work well for most use cases. We can see that there are a number of setup tables which can be used for configuration with the SQL language, and there are also command-line settings which mirror them — which is important, because those tables are ephemeral: when your instance restarts, the configuration will be lost. So what are the configuration settings to keep in mind? setup_actors allows us to configure which users should be instrumented, and typically you keep it instrumenting all the users. setup_consumers is what kind of summaries are going to be built, and I highlighted here in red the transaction summaries, which are new in MariaDB 10.5 — that is part of the great work which has been done on performance schema in MariaDB 10.5. Another part of the configuration is setup_instruments, which basically corresponds to the instrumentation points — the events which are going to be counted and timed. In MariaDB there are almost a thousand instrumentation points as of MariaDB 10.5.8, and almost 300 of them are timed and enabled, so you can get a lot of detail right away, and you can configure even more of them. Additionally you can configure the objects, which is pretty much about which database objects — tables, stored procedures, triggers — you want to instrument; these are all configured through the setup_objects table.
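A minimal sketch of that configuration; which instruments and consumers you actually enable is workload-specific, so treat the patterns below as placeholders:

  -- In the configuration file (needs a restart):
  --   [mysqld]
  --   performance_schema = ON

  -- Runtime tuning through the setup tables (lost on restart):
  UPDATE performance_schema.setup_instruments
     SET ENABLED = 'YES', TIMED = 'YES'
   WHERE NAME LIKE 'statement/%';

  UPDATE performance_schema.setup_consumers
     SET ENABLED = 'YES'
   WHERE NAME LIKE 'events_statements%';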
So let's look at an example of performance schema. In this case we're looking at the table called events_statements_current, which shows us the currently running statements. This is the statement we are looking for, and we can see information about when it was started, when it ended, and a whole bunch of other stuff like how many rows it sent and so on, which can be very helpful information. Another example could be looking at a file summary. In this case it tells us information about the InnoDB data file — or actually all the files which share the type of being an InnoDB data file — and we can see how many reads, how many writes, how many events in total, and how many miscellaneous events happened to this file (miscellaneous, when it comes to I/O, typically means fsync and similar operations), as well as the times. Now, if you notice, in this case the times are some bizarre long numbers, and this is because by default MariaDB's performance schema uses picoseconds to measure time, and a picosecond is one thousandth of a nanosecond. That is a very, very exact timer — very high resolution — which performance schema uses, but which you need to convert into anything meaningful. Okay, as I mentioned, in MariaDB there has been a lot of work done to improve performance schema. In MariaDB 10.5, performance schema roughly matches MySQL 5.7's performance schema, so things like memory instrumentation, metadata locking, prepared statements, stored procedures, transactions and user variables have been added. That is pretty cool — it means some of the most usable features of performance schema are now available in MariaDB. Okay, let's move on and look at some additional data sources which are very valuable for instrumentation: the error log, the general query log and the slow query log — our logs, obviously. When you look at the error log you have a lot of flexibility: you can send it to files, or by default it goes to the systemd journal, from which you can query it. If you look at the slow query log, which logs the queries with their execution time, in this regard it's actually much more useful than the general query log. In recent times I find the general query log only usable for debugging purposes, when MariaDB crashes before query execution is complete: in that case you will often find the query in the general query log, because it is logged before execution, but it would not be in the slow query log. For other purposes the slow query log has a lot more valuable information, because the query is logged after it executed, so you will have execution timing as well as other wonderful data. You can configure the slow query log to log only queries which are slow, or you can configure it to log all queries, and especially in systems where you don't have a lot of queries — often in a development system — logging all queries and analyzing them with some tools really gives you the most detail. Though if you have tens of thousands or hundreds of thousands of queries a second, logging every single query becomes way too expensive.
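For example, capturing every query in the slow query log together with its plan could look like this (thresholds are illustrative; logging everything is only sensible on low-traffic or development systems, as said above):

  SET GLOBAL slow_query_log     = ON;
  SET GLOBAL log_output         = 'FILE';
  SET GLOBAL long_query_time    = 0;                      -- log all queries
  SET GLOBAL log_slow_verbosity = 'query_plan,explain';   -- include the plan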
This is an example of the slow query log with explain, which is very cool, especially when it's configured to log only long queries, because in that case you really get the exact explains of the queries which have been slow. Now, to mention a few words about EXPLAIN in particular — if you're not familiar with EXPLAIN, please google it. EXPLAIN is a very valuable tool which both database developers and DBAs need to know. It's really helpful to understand how a query is executed. You really want to make sure that your queries are executed correctly, using the right indexes and so on, because if they don't, you may be putting a lot more load on the database than you should — sometimes hundreds or even thousands of times more — and queries which have not been optimized well can really hurt your database performance. EXPLAIN in MariaDB has multiple output formats you can use. You can also run explain for a currently running query: for example, you find a query that has for some reason already been running very slowly for 15 minutes — you can just run explain on it. Here is your standard EXPLAIN, which outputs as a table, but there are also some advanced explain features. Like I mentioned, you can get EXPLAIN output in JSON format. You can use SHOW EXPLAIN FOR a connection — in this case you give it the connection ID. And then there is also ANALYZE, which is not called explain but is also a way to understand query performance: it runs the query itself, but it also provides you information about how that query was actually executed. Where EXPLAIN uses estimates about query execution, ANALYZE actually tells you what happened. Here is an example of ANALYZE, and you can see in this case the actual number of rows, the actual filtering, and so on and so forth. That is a pretty cool feature. Now, this is all good, but if you are looking for more visualization of this observability data from MariaDB, I would encourage you to check out Percona Monitoring and Management. This is the tool which we have developed at Percona; it is a completely open source solution for a number of open source databases, and it is particularly focused on really helping developers and DBAs to find the queries which are running poorly and optimize their performance. With that little plug, that's all I have for you, and now I would be happy to answer your questions. Thank you.
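For completeness, the statements covered in this last part look like this, with a made-up table and connection id:

  EXPLAIN             SELECT * FROM orders WHERE customer_id = 42;  -- estimates
  EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42;
  ANALYZE             SELECT * FROM orders WHERE customer_id = 42;  -- actual rows and filtering
  SHOW EXPLAIN FOR 123;   -- plan of the query already running in connection 123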
A broken MariaDB means broken Application, so maintaining insights in MariaDB operational performance is critical. Thankfully MariaDB offers a lot in terms of observability to resolve problems quickly and get great insights into opportunities for optimization. In this talk, we will cover the most important observability improvements in MariaDB ranging from Performance Schema and Information Schema to enhanced error logging and optimizer trace. If you're a Developer or DBA passionate about Observability or just want to be empowered to resolve MariaDB problems quickly and efficiently you should attend this talk.
10.5446/52725 (DOI)
Hello, my name is Vicenzo Ciorbaro and today we're going to talk about MariaDB roles as part of the MariaDB FOSDEM Dev Room. So a little bit about myself. Now I've been a MariaDB developer for more than six years now and I've taken part in implementing a significant number of features including roles, window functions, as well as compatibility layers for MySQL and MariaDB migration, as well as distributions and other topics. So today we're going to dig into roles. The purpose of this talk is to give you an introduction into MariaDB roles, how they're supposed to be used and some of the benefits and differences with other databases. So roles initially were part of the open source database world released by POSgress in 2005. MariaDB followed in 2013 with the release of 10.0 and then they were added into MySQL 8.0 at a later date, 2016. The good part about roles is that most features correspond to the SQL standards. So the implementation is pretty much similar across databases, each with its own differences and extensions. MariaDB follows the standard closest, so whatever you see here, that's what the standard generally requires. So what are roles? Well, roles are kind of like users, but they have some special powers. Roles just like users can access objects in a database. They can read, write tables, create and drop databases or do any other different operation like creating users or setting up replication. What cannot be done, however, with roles is to log into the database server. And this is the first difference we can encounter. MySQL does allow authentication with roles. Now the special powers of roles is that they can inherit writes from other roles. And this is very powerful. We're going to see this a bit later. Roles can be granted to users, much like it would grant a regular privilege to a user, and then they can be subsequently activated. Simply granting the role doesn't mean that the user gets the access. They need to enable the role by the command called set role. Now the benefit of roles is that due to their nature, they simplify administering the database, which means easier maintenance and potential auditing to see which users have access to which resources. And speaking of access to resources, it's trivial to implement the least privileged principle using roles. Just don't enable roles that you don't need, and the solution is achieved. Now you might think that roles have a performance impact. However, we've worked very hard to make sure that it's not the case. For most practical examples, this will not be a problem. And with the addition of default role, which came in the version of 10.1 of MariaDB, with default role, you can integrate legacy applications and make use of roles functionality without having to modify the application at all. To better showcase this, I like to use examples and use cases. So here we're going to take a simplified data warehouse, and we're going to work with transactions. We have a number of users that want to do different tasks. So Rachel is part of the reporting department, so she wants to look at data. We also have an import robot, which its own purpose is to import data into the database, nothing else. Then we have Dave, who is a developer, which means that Dave needs to have full access to the data warehouse databases, including the staging one. And then we have Alex, he's an admin, and he can do all that Dave can do, but also create other users and create basically other developers. 
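The next part walks through exactly this setup with roles; written out as SQL it comes down to roughly the following, including the role-to-role grant used later to avoid duplication. The database names dw and dw_staging are assumptions for the data warehouse and staging schemas.

  CREATE ROLE report;
  CREATE ROLE import;
  CREATE ROLE developer;
  CREATE ROLE admin;

  GRANT SELECT ON dw.*         TO report;
  GRANT INSERT ON dw.*         TO import;
  GRANT ALL    ON dw.*         TO developer;
  GRANT ALL    ON dw_staging.* TO developer;
  GRANT CREATE USER ON *.*     TO admin;
  GRANT developer              TO admin;      -- admin inherits developer's rights

  GRANT report    TO rachel;
  GRANT import    TO import_robot;
  GRANT developer TO dave;
  GRANT admin     TO alex;

  -- A user then activates a granted role explicitly, or gets one at connect time:
  SET ROLE developer;
  SET ROLE NONE;
  SET DEFAULT ROLE report FOR rachel;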
Now the way we do this, using roles, is we create roles specifically for this task. So we're going to have to create a report role, an import role, a developer role, and an admin role. And I'm going to only grant select to the report role, insert to the import role, all privileges on the databases to the dev, and also to the admin, but also the admin gets the ability to create more users. Now let's have a look at what's going on here. So the final part is what we call the syntax for granting roles. And here we grant Rachel the report role, we grant the import robot the import role, then we grant Dave the developer role, and finally Alex the admin role. If we want to look at this visually, roles are in circles, brown, we have users at the bottom, and arrow represents a grant. So Rachel got the report role, and then the report role in turn has the select privilege, the same for import, dev, and admin. Notice that there is a lot of redundancy between the dev and the admin role. And the question is, can we do better? The point is to simplify the way we do database administration. So how do we do it? Well, this is where roles hierarchies come into play, and this is the best that roles have to offer. You can eliminate any duplication by using inheritance, much like you would when creating a class hierarchy in a object-oriented programming language. Now the rights that were granted to a role propagate to grantees, and that means that we can, here's our example from before. Now here are the duplicate commands. Now we can get rid of all this by simply giving the admin role, granting it the developer role. And by doing this, we've eliminated duplication, and we've achieved the same thing. So admin now inherits all the privileges on the data warehouse and staging databases. And here's the visual example. These are the grants we are removing, and we're simply removing them by granting admin to dev directly. Now we need to be careful here, because simply granting roles can introduce problems if, for some reason, trying to grant a role to another role would create a cycle. So for example, if you have role A granted to role B, and then role B granted to role C, and then finally you're trying to grant a role C to role A, you'd create a loop within the roles. And MariaDB will disallow this. You'll get an error when you try to create the loop. MySQL on the other hand will let you create cycles. Now the problem with cycles is that effectively they nullify the hierarchy, and every role in the cycle has the same rights. That's why you need to be careful, because you might end up giving a certain role more rights than you expect. Now when it comes to performance, you might be wondering how is this achieved, how is this whole role hierarchy stored, computed, and used for a privileged checking? Well, you don't want to check the whole hierarchy for every query. That just takes too much time. So to fix this, MariaDB and MySQL implemented caching methodologies. And the way MariaDB does caching is when we first start up the server, we load all the grants into memory and pre-compute the effective rights for every role in the graph, which means that we no longer need to go deep inside the graph to find the effective privileges. The first jump has what we need. We only recompute this graph when stuff changes. For example, during a grant statement, which usually doesn't happen very often. 
MySQL on the other hand has a slightly different approach in that MySQL will, whenever a user connects, it will create what we call an authentication ID. And this authentication ID is a combination of the username and all the roles that it has active at that point. And it will compute the effective rights for this whole combination and store it in the cache once, which means that for subsequent queries, the checks are going to be fast. But the first time you do this connection, it's going to take a bit longer to get the correct access. Now, with that said, both databases do optimize for the critical path and that is frequent queries and not the less frequent grants changes. MySQL's implementation effectively has no performance penalty on grants, whilst it does have a slower first access check. MyIDB has slightly slower grant operations by computing the necessary steps during the grant, but overall, the performance should be very similar when it comes to checking. Now the other thing we need to look for is how can we do the least privileged principle? And this is simple, but let's see what can happen if we don't do it properly. So let's imagine Dave has a tricky bug to fix and he needs to clone some tables to investigate. So with our previous example, he activates his developer role by doing set role and then he gets all the rights necessary to work. He creates a transactions staging table, just like the one in production, and gets to experimenting with different transactions. Finally, at the end of the day, he finishes, finds the problem and wants to clean up after himself. He does this by issuing drop tables transactions and suddenly he realizes he forgot to change the database to be staging. So because he had the rights on both the production and the staging database, he didn't drop the correct one and now he has other problems to deal with. Now all this can be avoided and the way you should do this is you should spill it roles into safe and dangerous ones. And by dangerous, it can be any role that has access to data and is able to modify it. Safe roles are probably just read-only roles or something that it cannot mistakenly make your application have some downtime. And you should only activate dangerous roles whenever you need them, but not for longer than that. And the good part about roles is that changing a role is very cheap, whilst previously you would have to change users, which would incur potentially a bigger overhead. So this is the advantage of using roles. Roles between MySQL and MariaDB are, there are a few, but they're not insurmountable. So MySQL allows enabling multiple roles at once and this can be done with MariaDB by doing an aggregate role. So just create an intermediate role and you're going to activate that one. The intermediate role should have all the multiple active roles granted to it. Additionally, MySQL implements mandatory roles and these are always active for a user and they are granted automatically when creating a user. MariaDB does not have this, but you can probably achieve a similar setup with default roles. Both MySQL and MariaDB have implemented default role and the thing about default role is that it effectively runs a set role command during connection, which means that you have the rights as soon as you connect to the database, no need to run a separate query for that. Now let's talk about system tables. So MariaDB stores its roles in system tables. 
Before 10.4, the table name was MySQL user and now it's been renamed and changed in 10.4 to be MySQL global priv. For both cases, you need to look for the is role flag, either as a column or part of the JSON description document. If the is role is marked as yes, then you cannot log in with that entry because it is designated as a role. Other than that, the roles are completely identical to users. You will find the same fields such as the access bits for each individual grant. Now, secondly, we need to store the grants somewhere and these are basically stored as edges in the graph. MySQL roles mapping is what MariaDB uses. There is an additional column inside, which is admin option. This column means that if a role is granted, it can be granted to others by that user. It lets you propagate the role to other roles or users. Metadata about roles can be fetched from information schema. We have enabled roles and applicable roles. MySQL 8019 has introduced some more tables that are part of the SQL standard and MariaDB should probably implement them as well, most likely in the next release. These roles are role table grants, role routine grants, role column grants, and admin role authorizations. All these are things that MariaDB can implement and it probably will. For the moment, MySQL has an edge when it comes to information schema completeness. Overall, both databases do allow for role-based access control. From a functional perspective, they pretty much get the job done. There are some differences, but most of them should probably not impact your use case much. There are some differences when it comes to migrating from MySQL to MariaDB. This is what I'm going to cover next. Given that MySQL introduced roles in 80, the only way to migrate from MySQL to MariaDB is by using a dump and restore. For system tables, this is not enough because the corresponding tables do not match. MySQL uses different tables to store roles information. It's been working this area, though, and thanks to a task called MDEV 23630, it is possible to now drop to dump roles, not as insert statements, but as create user and role statements. You can do this by running MariaDB dump on MySQL. The flag is a dash dash system and the switch for it is user to get all the user grants. There still are problems that cannot be solved automatically, at least not at this point. There are privileges in MySQL 80 that do not map to MariaDB's privilege system. The names do not match. You can get a dump from MySQL with the correct create role and grants, but you will have to edit them manually whenever you encounter things that do not work on MariaDB. We have created a task to track progress on this. I believe there is a need for as much automation as possible, but there is a limit to how much we probably should invest into simply maintaining the differences between MySQL and MariaDB. This is actually a question for those listening. Do you think we should pursue the use of migration from MySQL to MariaDB more? Or do you think that the current setup is enough? Let us know. With that said, I couldn't be here without FOSDEM having organized this great conference. I also couldn't do my day-to-day job if it weren't for my DB's Foundation sponsors. They helped enable me to work on MariaDB and be part of the MariaDB community. Thank you for everyone involved. You can reach out to me on Zulip. You can also find me via email, vchansu.mariaDB.org, and you can find more about me at MariaDB.org slash vchansu. I hope this information was useful. 
Thank you very much and enjoy the rest of FOSDEM.
MariaDB has had roles all the way back in 10.0 (2013). MySQL now supports roles as well, starting with 8.0. This talk will go through an overview of Roles in MariaDB and how they can be used. The talk will also highlight the differences, as well as the migration requirements, should you need to move to (or from) MariaDB. Roles in MariaDB follow the SQL Standard implementation. Roles are a very useful feature for ensuring ease of use for a DBA. Not only that, but thanks to DEFAULT ROLE, applications not previously aware of roles can make use of them transparently. In this talk we will cover the following aspects of Roles: * Creating, granting, activating. * Roles granted to roles (the role graph) * Default Roles * Information schema tables * Differences to MySQL, migration with mariadb-dump.
10.5446/52726 (DOI)
Hello, my name is Oleksandr Byelkin, also known as Sanja. I am working for MariaDB Corporation as a developer. Today I want to tell you about the UNION, INTERSECT and EXCEPT operations in MariaDB Server. The roots of these operations can be found in mathematics, where there are three popular operations on sets. The first is union, which is also UNION in SQL: the result is the objects from both sets. Then there is intersection, INTERSECT in SQL: the result is the objects which belong to both sets. And difference, which in SQL is called EXCEPT (in some SQL servers it is also called MINUS): the result is the objects which belong to the first set but not to the second. I also put the mathematical notation for these operations here, but you will not need it in SQL. A set, according to the definition, is a collection of distinct objects, so the DISTINCT form of these operations most closely matches the mathematical meaning. Here I used the world database, the standard example database in MariaDB; it contains probably outdated statistics about countries, cities and languages. So let's try to find the biggest countries. A country could be the biggest by population or by surface area, and we want both. Here we just order countries by population and take the top five; then we UNION that with the same table ordered by surface area, again taking the top five. And we get eight records, not ten, because there were some duplicates and the DISTINCT operation removed them. Pay attention that UNION is used here without DISTINCT, because DISTINCT is the default: you can write it or skip it, the result will be the same. I used a global ORDER BY Name just to have a stable result set that does not depend on how the server internally produces the result; that is good practice if you are writing tests or something like that. But if you use a UNION, or really any query, inside a FROM clause, an ORDER BY there is usually useless, because the table will be joined afterwards and the order is not preserved; most of the time the server simply drops the ORDER BY in that case. I also recommend putting each SELECT in brackets, especially if you use an ORDER BY or LIMIT clause. Why? Let's change this query a bit: we add the columns population and surface area, and remove the brackets around the last SELECT as well as the global ORDER BY. And the result is disappointing. Why? Because the server thinks the query is this: the last ORDER BY is treated as a global ORDER BY. This is historical behaviour in the server, and we cannot remove that interpretation because it is relied upon by clients. So what are we actually doing here? We take the first five countries by population, then UNION that with the whole countries table, then order everything by surface area and take the first five. That is not what we wanted; this query probably makes no sense. So please pay attention to the brackets. And what about the duplicates? We can easily find them with an INTERSECT query, which returns the countries that are in both lists, as you can see here. We can also try to find the countries which are among the biggest by population but not among the biggest by surface area, and you can do that with EXCEPT. ALL instructs the server not to remove duplicates. The results should in some sense be obvious; with UNION it is clear what the result will be. Here we just used our first query but with ALL, and you see the duplicates are present: ten records are returned. Then, as for INTERSECT ALL and EXCEPT ALL, that is a bit more complicated.
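Since the talk's slides are not reproduced in this transcript, the following is a reconstructed sketch of the kind of queries being described, using the standard world database (the table Country and columns Name, Population, SurfaceArea are the usual ones in that schema; treat the exact queries as illustrative):

    -- Biggest countries by population OR by surface area; DISTINCT (the default) removes duplicates
    (SELECT Name FROM Country ORDER BY Population DESC LIMIT 5)
    UNION
    (SELECT Name FROM Country ORDER BY SurfaceArea DESC LIMIT 5)
    ORDER BY Name;

    -- Countries that appear in both top-five lists
    (SELECT Name FROM Country ORDER BY Population DESC LIMIT 5)
    INTERSECT
    (SELECT Name FROM Country ORDER BY SurfaceArea DESC LIMIT 5);

    -- Among the biggest by population but not among the biggest by surface area
    (SELECT Name FROM Country ORDER BY Population DESC LIMIT 5)
    EXCEPT
    (SELECT Name FROM Country ORDER BY SurfaceArea DESC LIMIT 5);

    -- UNION ALL keeps the duplicates, so this returns ten rows
    (SELECT Name FROM Country ORDER BY Population DESC LIMIT 5)
    UNION ALL
    (SELECT Name FROM Country ORDER BY SurfaceArea DESC LIMIT 5);

Note how each SELECT with an ORDER BY and LIMIT is kept in brackets, which is exactly the advice given above.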
And to show it more clearly I used quite artificial examples, because every realistic example I could invent was either huge or had a very bad database schema, and I don't want to show examples with a bad database schema, so I think it's better this way. I also used table value constructor SELECTs, which make the queries self-descriptive: everything is right there in the query. So let's take a look at INTERSECT ALL. I think the best way to picture it is matching records between the two sets, where a record that has found a match is not used for matching again. So we match the first one here, then a two, and then the second two, and everything that found a match goes into the result set: one, two and two. There was no match, no peer, for the three, nor for the second one and the fourth, so they are not in the result. EXCEPT ALL can be represented the same way, except that the matched records are removed from the first result set. Here the second one has no peer in the second result set, so it goes into the result. For a long time we had only the UNION operation, and with UNION everything was a bit simpler. With UNION you can combine in any order you want; if you do not mix DISTINCT and ALL it's clear, and even if you mix them, you only have to remember the last DISTINCT in the sequence. Why? Because that last DISTINCT effectively turns every preceding intermediate result into its distinct form. You can see it here: I have separately shown the intermediate result at the point of the last DISTINCT for both examples, and the final result is the same. For the server, the first is naturally better, because it contains fewer records, and the fewer records we operate on, the faster it is: the result fits in memory, or lookups in the index used for deduplication are faster. This optimization is applied automatically; you do not have to do anything to get it, it has been built into the server for a long time. But if we have several operations, at least three, we have to think about the order in which they are executed. According to the standard, INTERSECT has higher priority than UNION and EXCEPT, and UNION and EXCEPT have the same priority. You can think of it like arithmetic: UNION is addition, EXCEPT is subtraction and INTERSECT is multiplication, and the priorities of these operations match their arithmetic analogues. But in Oracle, and in our Oracle mode as well, UNION, EXCEPT and INTERSECT all have the same priority, so they are executed in the order they appear in the query. And the result really depends on the order of execution. For example, here, if we do the UNION first, the result is 1, 2, 3, 4, and the INTERSECT with 1, 3 then gives 1, 3. But if, following the standard rules, the INTERSECT is executed first, we get 3, 4 INTERSECT 1, 3, which is 3, and the UNION with 1, 2 then gives 1, 2, 3. To enforce the order you want, you can use brackets around a UNION or any other table operation; in the first example here you can see the additional brackets used to get the result we want. Speaking of where we got all these features: as I said, UNION has been with us for a long time; it appeared somewhere around MySQL 3, and Sinisa authored it. In 10.1 Igor made an optimization so that UNION ALL does not collect the intermediate result in a temporary table but just returns it, because UNION ALL, if it is not ordered, usually needs no post-processing and can stream rows directly to the client. In 10.3 I added EXCEPT DISTINCT, INTERSECT DISTINCT and the default operation precedence. In 10.4 Igor and I together added support for brackets in table operations.
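A small sketch of the precedence behaviour just described, using table value constructors as in the talk (values chosen to mirror the 1,2 / 3,4 / 1,3 example; I believe this syntax works on MariaDB 10.4+, where both VALUES and brackets around table operations are available, but adjust to plain SELECTs if your version complains):

    -- Default (standard) precedence: INTERSECT binds tighter,
    -- so this is {1,2} UNION ({3,4} INTERSECT {1,3})  ->  1, 2, 3
    VALUES (1),(2)
    UNION
    VALUES (3),(4)
    INTERSECT
    VALUES (1),(3);

    -- Brackets force the UNION to happen first:
    -- ({1,2} UNION {3,4}) INTERSECT {1,3}  ->  1, 3
    (VALUES (1),(2) UNION VALUES (3),(4))
    INTERSECT
    VALUES (1),(3);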
Actually, MySQL now also has brackets in table operations, despite the fact that it only has the UNION operation; if you mix ALL and DISTINCT you probably need them anyway. And in 10.5, EXCEPT ALL and INTERSECT ALL were contributed by Wine Sia (I hope I have the name right). So in 10.5 you have everything. Now, to use all this efficiently, it helps to know a bit about the internals. The standard way to look into the internals of the server is EXPLAIN EXTENDED combined with SHOW WARNINGS. You can actually use this on any query to see how it looks after being rewritten in the optimization phase. Here the output is a bit cryptic, but I have rewritten it for you with proper indentation and highlighted some features. We have an EXCEPT in brackets, and it has been moved into a derived table in the FROM clause of a dummy SELECT. So the first SELECT is untouched, and the second has turned into a SELECT with the whole bracketed part in its FROM clause. The __4 here is the name of the derived table, and 3 is the name of the column, because that is how columns are named when you do not give them an explicit name. You can see that this can be expensive, and it probably is, so if you can write the query so that it is executed as a flat chain, it is better to do so, because queries executed as a chain behave exactly the same. And how does the server execute such a chain? For the whole chain only one temporary table is used, which accumulates the intermediate result of the operations; it can then be filtered depending on the operation, and if there is a global ORDER BY or something like that, the rows go through a final SELECT which produces the final result. For UNION it is simple: each SELECT just adds records to this temporary table, and if there is a DISTINCT we use a unique constraint to deduplicate in this table. If DISTINCT is mixed with ALL, we do the DISTINCT part with the unique constraint enabled and then switch it off, which allows any records to be added to the table afterwards and gives the correct result. EXCEPT DISTINCT is also not very complex: for the EXCEPT part we just remove the rows that are found, using the unique constraint as an index to find the matching rows and delete them. INTERSECT DISTINCT is a bit more complicated: it requires a special hidden field in which we mark rows on each pass. The first SELECT adds records, the second marks the records it finds, and then there is a final pass where we change the filtering so that only the marked rows end up in the result. INTERSECT and EXCEPT with ALL, when they are combined in one chain, are even more complicated; I have simplified the picture a bit because it is quite complex, but in general we use yet another hidden field, a duplicate counter, together with the unique constraint. Instead of adding two duplicate rows to the table we just increase this duplicate counter, or decrease it in the case of an EXCEPT operation, and after that there is of course a filtering pass which unfolds these rows according to the duplicate counter. So that's all I wanted to tell you. Thank you for your attention.
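For reference, the inspection workflow described above looks roughly like this (illustrative only; the exact rewritten query, and internal names such as __4, depend on the server version and on whether the bracketed part can be flattened into a chain):

    -- Ask the optimizer how it rewrote a bracketed set operation
    EXPLAIN EXTENDED
    SELECT Name FROM Country WHERE Continent = 'Europe'
    UNION
    ((SELECT Name FROM Country ORDER BY Population DESC LIMIT 5)
     EXCEPT
     (SELECT Name FROM Country ORDER BY SurfaceArea DESC LIMIT 5));

    -- The rewritten query, possibly containing derived tables, is shown as a Note here
    SHOW WARNINGS;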
The presentation will show how and why use UNION/INTERSECT/EXCEPT. How combine them to get expected result. We will also dive into implementation details (with the help of the EXPLAIN command) to understand how MariaDB executes these set operators. This will help in understanding and troubleshooting performance problems.
10.5446/53531 (DOI)
So, usually I like to say that it's a pleasure to come back again to AGCT, which it is in a way, but it's also sad, because I never imagined AGCT without Gilles Lachaud. Usually he would be the first person I would see when I landed in Marseille, and that's not going to be the same anymore. I decided to use slides, unlike Misha, because I want to show you a few pictures. One is already here, which is from his webpage. Maybe some more pictures: this is him when he was somewhat younger. This one, maybe it's not so clear, is a picture taken in his office, which is not far from here. And this picture is in this very room, where you can see Gilles listening intently to you-know-who. So these are the sources; I think David is looking somewhere else, though. I just want to quickly summarize his career. As Misha mentioned, he obtained his doctorate, the Doctorat d'État, which is, I think, a little higher than a PhD, from Paris 7 in the year 1979 under Godement, and that was the title of his thesis: spectral analysis and analytic continuation, Eisenstein series, zeta functions and solutions of Diophantine equations. So you see many of the themes recurring in his later years are already there. I was just looking it up, and interestingly, it was in the year 1979 that three students got their degree from Godement, and one of them, I think, is here. Gilles was one of them, and then, it seems, François Rodier and Christophe Soulé. And I think these were the last batch of students of Roger Godement. Godement himself is extremely interesting, but I'm not going to talk about him today; I don't know him as well, obviously. Anyway, to continue very briefly: this, I suppose, is probably a prize he got for his thesis. For most of his professional life he held a position with the CNRS, maybe initially in Paris and then in Nice for a little while, and then for the most part he was in Marseille. He was the second director of this institute of mathematical meetings, the CIRM, and he was responsible for it from September 1986 to August 1991; you can find this on the web page of the CIRM. He was also the person who really laid the foundations of the library that you see here. And he was the director of the Institut de Mathématiques de Luminy for, as you can see, a long time, and those of us who have had some small amount of administrative experience can admire being in that position for something like 11 years. As he writes in one of his articles, he led with tact and intelligence. He was, as Misha also mentioned, the founder of this meeting, which goes back about 32 years, and he had many students. Some of them are here, and many others did, so to speak, a postdoc or habilitation with him. I had the pleasure of attending a conference for his 60th birthday called SAGA, Symposium on Algebra, Geometry and Applications, which was held in Tahiti in 2007, and whose proceedings appeared a year later with World Scientific. These days we have access to the internet and so on, which once upon a time could only be imagined, and it is possible to learn what somebody has been doing in a few clicks. You can go to MathSciNet and try to find out what they have been up to; I sometimes call it looking up the horoscope. Of course, it tells you something, not everything. Anyway, I did that. So here is the MathSciNet record, if you like. It has some numbers, and he has a very respectable number of papers with a large number of citations.
I just want to sort of, what is interesting here is the range of subjects that appear here. You know, of course, Algebra, Exhometry, number three is prominent information and communications, according would come here. But also several complex variables, partial differential equations, topological groups, manifolds and cell complexes, and of course history. So these days, it's not common to find this kind of a range in people. And OK, so here are his collaborators. I guess I feature among them. Anyway, so one, although we are into counting one thing or another, one doesn't always just count a number. But so some major themes of his work, I would say, could be sort of divided. And I just put some catch words and some representative papers of his. So his thesis had to do with automatic forms and resulted in rather substantial paper. This major symposium, these proceedings of symposia and pure maths and a long paper in Invencioni and another one in Invencioni. And what is kind of interesting is, well, this is only roughly speaking in automatic forms, but he sort of returns to it. And this is his last published paper here, which is, in fact, in the proceedings of the previous AGCT. And then I will sort of divide his other work. He worked a lot on curves and abelian varieties over finite field, starting with this note in the French Academy and, of course, Jacobians and abelian varieties. This was subject of one of his talks here, where he showed us a lot of explicit calculation that some of us may remember. And I would also mention work with Christoph Riesenthaler, who I don't know what he will speak about. But anyway, so these are some of the papers which you could put broadly in the areas of curves and abelian varieties and related objects. And I sort of separate that from algebraic varieties, which roughly corresponds to higher dimensional algebraic varieties, say surfaces upwards. And those that are not necessarily irreducible, if you like, algebraic sets over finite field and starts with the paper that Misha mentioned, this 60-page paper in Crele. And then maybe I put the paper that we wrote together. And a much more recent work with Robert Roland, who is also here, which had to do with number of points of algebraic sets over finite field. And I'll say something about the last two in a moment from now. He held, I mean, for a long time, I think this was a topic which was close to his heart, continued factions, sales, and client polyhedral. And although he wrote few papers on it, he often told me that he had plans to write a book. Unfortunately, I think that has not seen the light of the day, if I'm not mistaken. Maybe there is a manuscript that some people know or can revive. But I remember Jill mentioning many times that he's working on a book on this. And then, of course, he more or less led the interest in French mathematics on coding theory, especially aspects coming from algebraic geometry after the invention of the so-called Gopah quotes. So he wrote up the, he gave this seminar, Burbaki. As you can see, very shortly after Gopah's papers appeared, and this was among the first publications, I think, in Europe on that topic. I suppose another of his contribution to coding theory is to bring Misha Fassman and Serge Bladout here in France, which is, of course, not listed in the publications. But one of his close collaborators and friends, and who, unfortunately, is also no more, I remember him quite fondly, Jack Wolfman. And with him, he wrote a few papers. 
In fact, I think this paper is very, very highly cited. Maybe it has more than 90 citations and so on. People, I mean, you can sort of see the review and the coding theorists are thanking Jill for the signal service he's doing to the community by making the related mathematics much more accessible and so on and so forth. This is a relatively simple paper, but has had a considerable impact, which is really sort of translating things into coding theory that he had just learned. I suppose I will come back to this work. And here are some more papers, again, sort of forming a bridge between counting points of varieties over finite field and applications to linear codes, as well as things coming from number theory, exponential sums, and Einstein series, and so on and so forth. So, OK, these are a couple of papers, again, that I wrote with him. And this last paper is kind of interesting. It grew out of a workshop that was held not too long ago in Los Angeles, where we were expected to assemble in a group of six. And people don't lecture to each other, but sort of take up a problem and just discuss. And that's the only thing you do for a week. And we were very apprehensive of that experiment, but it actually turned out to be rather productive. And I suppose I wrote my paper with maximum number of co-authors and probably so did Jill. So this is the sixth author paper that we have in the, in fact, a volume of Springer for women in mathematics. Also happy to have something there. OK, so anyway, that's just a summary. I want to maybe focus on maybe just give you some sampling of the work. And I have chosen this, and there is a reason why I have displayed the cover page of the paper I wrote with Jill several years ago. And that's because, you know, obviously, you know, I come from India. And this is a paper which appeared in a special volume of Moscow Magirnal for the morning 65th birthday. But you see here a quote is written. And this is for those who may not be aware, this is written in the Devna Grip. And this is actually a quote from Rig Veda. What it says is, it says, Tirashchino Vithato Reshmi Resham. What that means, it's translated over here, probably you cannot read. And it roughly translates to saying their chord was extended across. And anybody who kind of sees this and sees some Indian name would think that I would be responsible for putting that. But that is not true. It was Jill who came up with that, you know, and he sort of insisted that we have this quote from Rig Veda, which is one of the ancient texts, you would say. And it sort of has multiple meaning. What that alludes to their chord was extended across is, you know, you sort of think of scientific or if you like mathematical thought as some continuous ether which is being passed from one generation to another. And like we build on the work of predecessors, and you know, of course, one is reminded of Newton and so on. And you kind of extending the work of older masters. And sorry. But there was a dual meaning to it, the chord was extended across. As I will mention, this paper also uses a lot of Bhattini theorem. So it involves taking, you know, sections and so on and so forth. So there is also that meaning. So I just wanted to show that and mention that really I had nothing to do with that quote except to typeset it in Dev Nagari script here. But okay. So coming back to the contents, maybe, you know, I should show some theorem. So before that, perhaps, I should give some background. 
So let us — and everybody here, of course, knows about the Weil conjectures and so on — but there was a time when these were conjectures and not theorems. At that time, one of the basic estimates one had for counting the number of points of an arbitrary projective variety (you have similar things for affine varieties) over a finite field is the so-called Lang–Weil inequality. What that says is that the number of F_q-rational points of an n-dimensional variety of a given degree d in projective space differs from the number of points of n-dimensional projective space by something of the order of q to the power n minus one half, with the factor (d-1)(d-2) in front, plus some constant times q to the power n minus 1. Of course, if n is equal to 1, this is probably as good as you can do, because then you have square root of q here; but in general this is a pretty bad bound, although it is maybe the best you can do in that generality. The constant is something which depends on the dimension of the variety, the dimension of the ambient projective space and the degree; in other words, it is independent of q. That is the point here. Of course, if your variety is nice, then you can get a much better estimate than n minus one half, and this is one of the first corollaries Deligne drew when he proved his Riemann hypothesis. There are two consequences in that paper: one is this, the other has to do with the Ramanujan–Petersson conjecture. It says that if you have a smooth complete intersection of dimension little n in a projective space of dimension capital N — complete intersection meaning it is defined by the right number of equations, say r equations, where r is the codimension — then the number of F_q-rational points of X differs from the number of points of n-dimensional projective space, which is written here for your convenience, by a factor of q to the power n over 2. Obviously, if n is 1, then q to the power n minus one half and q to the power n over 2 are the same, but as soon as n grows, the former is much worse and the latter is far, far better. The coefficient b-prime-n here is a really explicit quantity: it is the so-called primitive Betti number. As you know, for a complete intersection it is essentially only the middle homology which matters, and you adjust it by one depending on whether n is even or odd, and then you get this b-prime-n. This is something one knows explicitly, because when you have a complete intersection cut out by the right number of equations, the degrees are uniquely determined, so you consider the so-called multidegree. Here is an explicit formula for the primitive Betti numbers: if the multidegree is d_1 to d_r, meaning the variety is cut out by r homogeneous polynomials of degrees d_1 to d_r, then the primitive Betti number is given by this formula. I think one of the first places where you can find this is in the work of Hirzebruch, although I don't think he worked over finite fields; but there are other references. So that is the point I am trying to make: if you know your capital N, you know the multidegree, and you know the dimension, then this is something you can effectively figure out. And of course, if n is equal to 1, this estimate will be like 2g, which is generally better than the other one. OK.
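Since the slides are not reproduced in this transcript, here is a hedged reconstruction, in the usual notation, of the two classical estimates just described, writing \(\pi_n = \#\mathbb{P}^n(\mathbb{F}_q) = q^n + q^{n-1} + \cdots + 1\). Lang–Weil (1954): for a geometrically irreducible projective variety \(X \subset \mathbb{P}^N\) of dimension \(n\) and degree \(d\),
\[
\bigl|\,\#X(\mathbb{F}_q) - \pi_n\,\bigr| \;\le\; (d-1)(d-2)\,q^{\,n-\frac12} \;+\; C\,q^{\,n-1},
\]
where \(C\) depends only on \(N\), \(n\) and \(d\), not on \(q\) (making \(C\) explicit is the subject of the speaker's Theorem 2 below). Deligne (1974): for a smooth complete intersection \(X \subset \mathbb{P}^N\) of dimension \(n\),
\[
\bigl|\,\#X(\mathbb{F}_q) - \pi_n\,\bigr| \;\le\; b'_n(X)\,q^{\,n/2},
\]
where \(b'_n(X)\) is the primitive middle Betti number, computable from \(N\), \(n\) and the multidegree \((d_1,\dots,d_r)\) alone, by the formula going back to Hirzebruch.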
So those are two classical results, from the 1950s and the 1970s, that were the motivation for what we did in this paper, and maybe I'll just state the theorems. Roughly speaking, there are three theorems in this paper, and I'll try to spend one slide on each of them. The first theorem is a generalization, if you like, of Deligne's bound in the case of complete intersections, except that we don't require smoothness. So you take a complete intersection of dimension n, cut out by the right number of equations, you have the multidegree, and you have some control on the dimension of the singular locus: say s is the dimension of the singular locus, or more generally s is bigger than or equal to the dimension of the singular locus. Then the inequality says that you have something like the primitive Betti number — by the way, this is the primitive Betti number of a smooth complete intersection, so still that same explicit quantity, only not with n but with n minus s minus 1 — and the power of q you get here can in principle be better than n minus one half, depending on how singular, or how smooth, the variety is. It is in fact q to the power (n plus s plus 1) over 2, plus some constant, which is also effective, times q to the power (n plus s) over 2. This constant is actually 0 if your variety is smooth, in which case you can take s to be negative 1 — the singular locus is empty, so its dimension is minus 1 — and you can check that if you substitute minus 1 for s, you just get back Deligne's inequality. Of course, we are not saying that it is a corollary, because Deligne's result is used in the proof; that would be cheating. Another important point is that the constant was made effective. It is not a very efficient bound, but it is an explicit bound; delta here is the maximum of the multidegrees. Another corollary, if you like, is that if your complete intersection happens to be normal, which means it has no singularities in codimension 1, you can take s equal to n minus 2; if you put that in, you get a 1 here, so it is essentially the first Betti number, and if you work it out you see that you actually also get Lang–Weil as a corollary. So you can think of this as a common generalization, at least for normal complete intersections, of Lang–Weil and Deligne. That is something we proved, and one can use it to recover some of the older results and so on; Hooley and Katz, for instance, had an estimate like that, but without the explicit constants appearing here. So that is theorem 1. And what is theorem 2? In theorem 2 we revisit Lang–Weil, so that is this inequality. Lang–Weil says that this constant is independent of q, but they don't say how large it can be. What we do here is simply make it effective. Typically in practice, when you are working with a variety, it is given to you defined by a bunch of polynomials and you have some control over the degrees of those polynomials — I am not necessarily saying that the number of equations is the right number. So there are some m equations of degrees d_1 to d_m, and what we showed is that this constant can be bounded by an explicit quantity like that. You also have a similar thing for an affine variety; that is, it doesn't have to be projective.
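Again as a hedged reconstruction of the shape of these statements, as I read the bounds being described (the exact explicit constants are in the Ghorpade–Lachaud paper and are not reproduced here). Theorem 1, roughly: if \(X \subset \mathbb{P}^N\) is a complete intersection of dimension \(n\) with multidegree \((d_1,\dots,d_r)\), whose singular locus has dimension at most \(s\), then
\[
\bigl|\,\#X(\mathbb{F}_q) - \pi_n\,\bigr| \;\le\; b'_{\,n-s-1}\,q^{\,\frac{n+s+1}{2}} \;+\; C_s(X)\,q^{\,\frac{n+s}{2}},
\]
where \(b'_{\,n-s-1}\) is the corresponding primitive Betti number in dimension \(n-s-1\) for the same multidegree, and \(C_s(X)\) is explicit and vanishes when \(X\) is smooth (i.e. when one may take \(s = -1\), recovering Deligne's bound). Theorem 2, roughly: for an irreducible variety of dimension \(n\) defined by \(m\) equations of degrees \(d_1,\dots,d_m\), the constant \(C\) in the Lang–Weil bound can be taken to be an explicit expression in \(N\), \(n\), \(m\) and the \(d_i\), independent of \(q\).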
And because you have something explicit here, you could also look at the other side and get a lower bound for some type of varieties. For example, one can kind of get an analogous result to a classical upper bound for the number of points of hypersurface over fq. So that's sort of theorem 2. And then theorem 3, I'm just going to describe it in a superficial way partly because it is technical and partly since I'm not so much thinking about it right now. So what it is to do is basically, when Lang and Well wrote their famous paper, they kind of, you have to remember this is before the trace formula and things like that. So they have this inequality and they said that, okay, if you had an algebraic function field of dimension n over finite field fq, then there is a constant gamma for which this inequality holds with d minus 1, d minus 2 replaced by gamma. You should think of gamma like 2 times the genus if you were in the case of a curve. So what should be this gamma? And so they said that this gamma should be a birational invariant and it should be related to the Picard variety. And they make various more precise what they call conjectural statements about the zeta function and the characteristic polynomial of this Picard variety. And in effect, what we showed in this paper is that this conjecture, because when you are sort of over arbitrary, it's not clear what is the right Picard variety to choose and so on. So if you take the correct Picard variety, then we showed that the conjecture is actually true. So these are roughly three theorems that were there in the paper. How much time do I have? Okay. So I just want to say something about the proof because this is something which went on for several years. And as the quotation indicated, we used a lot of Bertini and we used some kind of suitable generalization of Weak Lepcher's theorem as you can already imagine when you are taking hyperplane sections. But we wanted that for singular varieties, so we did that. And of course, we used the trace formula and not just well one, but well two and some estimates for Betty numbers and a little bit of complex analysis and combinatorics. And so it was quite satisfying and especially what happened later on was something which took me completely by surprise. Because we spent some time trying to understand and we wrote it down and so on. And then people have over the years have found applications that we never imagined. For example, one of the places where the result got applied is in finite group theory. People trying to solve characterization of some kind of finite solvable groups by identities which are analogous to this Engels identities for nilpotent groups. And this very nice piece of work by all these mathematicians, I think 2006 was in Composito or for Warding's problem in function field, Francois Audier and more recently, student of Janowa and so on. They have used it for Boolean functions or APN functions. People have used it in finite geometry for hyperovols in projective planes over finite field or some kind of arithmetic progressions and so on and so forth. And in fact, frankly, some of these applications I don't even understand how they find it useful. Especially, you know, thing about group theory is rather mysterious how some estimates like that could be useful in a discrete problem. Anyway, but the result also has been extended. I mean, I mentioned already those estimates were not the sharpest possible. So people have tried to look at somewhat more refined estimates. 
And I mentioned especially these people from Argentina, Antonio Caffure and Jelermo Matera who wrote a series of papers which kind of generalized or extended some of these inequalities. Okay, so this is one thing. I wanted to talk about another thing. And you see, in the same paper we wrote a section, the purpose of which was to actually put down on paper something that Jill had already observed many years ago, but had not bothered to, you know, write a paper about it. And so there is a conjecture there which is really due to Jill and what it is is the following. But suppose now you have a complete intersection which is not necessarily irreducible even. And let's say it's, you know the dimension, you know the degree, then the number of f curational points is bounded above by this quantity. Okay, remember PN is still the number of points of n dimensional projective space. And I'll just, maybe this is clearer if I give you an example. This example is you think of a hyper surface or in other words just take a homogeneous polynomial of degree D in m plus 1 variables and look at its zeroes in the projective m space over fq. So that means what? That means this n will be equal to m minus 1. So if you substitute n equal to m minus 1, what you get is a very familiar inequality that the number of points of a hyper surface of degree D, I'm sorry this should be m, should be bounded above by dq to the m minus 1 plus Pm minus 2. And this was, you know, once a conjecture made by Misha Fassman and rather quickly proved by Jean-Pierre Seire in a letter to him in July 1989 if I remember correctly. And so you can sort of think of this as a generalization of this not just to hyper surfaces but complete intersection where you have some control over the dimension and the degree. And some years later Jill's prediction came true and in fact those of you who have been attending AGCT would remember that Alan Koura in fact proved this conjecture in the affirmative and actually proved a much more general result where he actually looks at an arbitrary variety and look at its, you know, the degrees and dimensions of its irreducible components and has an inequality which already in the equidimensional case would sort of boil down to this. And in fact Jill and Robert Rolla also wrote an ice paper not too long ago which is in the journal of pure and applied algebra. Now you see this, so this is probably was this conjecture was inspired by this inequality and it seems there were two inspirations. So Jill was inspired to make this conjecture and Misha on the other hand who had originally proposed this inequality for a hyper surface together with his student also student of Root Pelikan who spoke this morning Boguslowski they made another generalization or another extension. So to explain that let me just introduce some notation. So I just want to spend one slide on this if I may and then I'm more or less done. So you look at the maximum number of FQ rational point that are linearly independent homogeneous polynomial all of same degree D can have over FQ. Okay, so I denoted by E sub R D and the conjecture of Fassman and Boguslowski is that this is explicitly given by this formula and where are there are some new eyes and what are these new eyes what you do is you sort of look at M plus 1 tuples indexing sets of monomials of degree D and M plus 1 variables arrange them lexicographically look at the R tuple in that and that sort of determines this quantity. 
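To pin down the inequalities being referred to, here is a hedged reconstruction in standard notation, with \(\pi_j = \#\mathbb{P}^j(\mathbb{F}_q)\). Serre's bound (1989, answering Tsfasman's question), usually stated for \(d \le q\): a hypersurface \(X\) of degree \(d\) in \(\mathbb{P}^m\) satisfies
\[
\#X(\mathbb{F}_q) \;\le\; d\,q^{\,m-1} + \pi_{m-2}.
\]
The conjecture attributed to Lachaud above extends this shape: for a complete intersection \(X \subset \mathbb{P}^N\) of dimension \(n\) and degree \(d\), not necessarily irreducible,
\[
\#X(\mathbb{F}_q) \;\le\; d\,q^{\,n} + \pi_{n-1},
\]
which recovers Serre's bound in the hypersurface case \(n = m-1\); as described above, Couvreur later proved a more general inequality, for arbitrary varieties in terms of the dimensions and degrees of their irreducible components, which specializes to this in the equidimensional case. The Tsfasman–Boguslavsky conjecture discussed next concerns the analogous maximum \(e_r(d)\) of points on the common zero locus of \(r\) linearly independent forms of degree \(d\).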
It's a fascinating conjecture and I myself was quite fascinated with this thing and if you maybe I am running out of time but suppose you were to take R is equal to 1 you will see that it boils down to sets inequality and if R equal and generally if R is less equal M then you get this thing you can just decode to that very good. So what do we know about this conjecture and so this is maybe my last mathematical slide because I say this because this is partly related to the paper of Jill on the parameters of read Miller quote and this was something that Jill was somehow fond of saying that he said well look Misha and I made some conjectures around the same time mine turned out to be right Misha is not quite so but you know Misha's conjecture also gave rise to a lot of mathematics. So R equal to 1 I already mentioned R equal to 2 was settled by Boguslowski and several years later my former student Madatta and I we showed using a work of Zanella that it is actually true also if R is less than or equal to M plus 1 provided you are looking at quadrics but more surprisingly we showed that it can be false if the number of equations is more than the number of variables and eventually we showed that the positive results true for the number of equations less equal number of variables is true in general and then but then since it is false sometimes we thought we should come up with a new guess and we proposed a new conjectural formula that I like to call the incomplete conjecture because it is not you know because you have R linearly independent polynomials homogeneous of degree d so the R can be maximum this M plus d choose d it does not go all the way but goes here and this incomplete conjecture was established for example not just M plus 1 but M plus 2 choose 2 and with Peter Bell and now we have proposed what we like to call a complete conjecture which also takes care of all the values but it is still a still a open thing so I am sort of indicating some flowering not necessarily the work of Jill so I will just stop with my formal concluding remarks I thought I will forget a few things but so I just wrote it down so I hope I have made it clear that Jill has made important and lasting contributions to mathematics especially in the study of algebraic varieties over finite fields linear codes and so on and his interest and his knowledge at least my impression of me is were quite deep and wide and when he became interested in some topics he would it was his tendency to you know sort of look deeper and not be in a rush to publish and as people have mentioned already besides his contribution to mathematics he was an institution builder he helped nurture and great institutes like this SIRM where we are sitting or standing and continuing success of these AGCT conferences which by the way started as algebraic geometry and coding theory some years later became arithmetic geometry and coding theory and now arithmetic geometry cryptography and coding theory so you just think of C as an idempotent maybe and it's really something which owes largely to his vision and efforts and besides this scientific institutes and conferences some of you may know that Jill was the president of the French pavilion at Auroville which is in Pondicherry in India and you know he had he had read or at least browsed through I have seen this significant amount of ancient and modern Sanskrit works including Vedas several of the Upanishads and the scholarly books of Sri Aurobindo some of you may know about this and I should tell you 
that if you were to talk to an average Indian — in fact, 99% of Indians — they would not have read these things. So that is something really remarkable: it's not just because you are born in India that you study all these things. Anyway, I wanted to say that, above all, he was a wonderful human being: always warm, generous, a very kind person who was willing to help others and would more or less go out of his way to do so. Personally, it has been a pleasure, and I think an honor, to have known him, and he will certainly be missed. Thank you very much.
I will give an account of some aspects of the mathematical work of Gilles Lachaud, especially the work in which I was associated with him. This will be mixed with some personal reminiscences.
10.5446/53532 (DOI)
Thank you, David, for the introduction, and thank you to the organizers for having me and for organizing such a great workshop. I've really enjoyed seeing all the great talks on a broad range of topics, and hopefully you'll find this talk interesting too. Everything I'm talking about today is joint work with Abbey Bourdon, Özlem Ejder, Yuan Liu, and Frances Odumodu. These are a lot of names, and long names, so all the results from this paper I will abbreviate like that. This is not a mathematician; it's our names — maybe our version of Bourbaki. OK, so let me begin. I'm just going to fix a curve, and I'll take it over Q for simplicity. The things that I'm going to state do work over an arbitrary number field, but there will be other number fields floating around, so it will be better to fix the curve to be over Q. Let me begin with a landmark result in arithmetic geometry, which I'm sure many of you know: Faltings' proof of the Mordell conjecture. It says that if F is a number field, then the set of F-rational points on C is infinite only if the genus is at most one. This gives a very strong result, and it provided the motivation for some of the talks yesterday: for higher genus curves you are immediately faced with this problem. I know it's not the way it's normally stated, but it's going to mirror the other statements. This brings us immediately to the question of finding all of the F-rational points on a higher genus curve. But in this talk I want to go in a different direction. What if you don't care about fixing the particular number field, but you just want the points to live in an extension of bounded degree? So what if I look at the union, over all number fields F of degree exactly d, of the points over F — the set of degree d points? When can this be infinite? We know that no condition on the genus alone will control the finiteness of that set. If you take a hyperelliptic curve, then no matter how large the degree of f is — so no matter how big the genus is — you have infinitely many degree 2 points, and you can easily see how to construct them: I just pick a rational number for x, and then I solve for y, and that will always give a point of degree at most 2. So you can find curves of arbitrarily high genus that always have infinitely many degree 2 points. This isn't the only way to get infinitely many degree d points; let's call that example one. For example two, you could take C to be a double cover of an elliptic curve E, where E has positive rank. Then maybe it's not as easy to write down a general form of the equations for C, but you still see that for any of the infinitely many rational points on E, I take the preimage and it's a point of degree at most 2, and since there are only finitely many points of degree 1, there have to be infinitely many of degree 2. And Harris and Silverman proved that this is the only way a curve can have infinitely many degree 2 points: if you have infinitely many degree 2 points, then you are a double cover of P1 or of a positive rank elliptic curve. But this is not true in general. Debarre and Fahlaoui showed that this is false in higher degree: they exhibited a curve that has infinitely many degree d points but no map of degree at most d to P1 or to a positive rank elliptic curve. You see it? Yes? I think that's right, but I don't remember off the top of my head.
Yeah, there are, I forget whether it's degree seven or genus seven, but there's an extension of these results by Abramovich and Silverman in degree three, maybe higher, I would have to look to check. Yeah, so you can go a little bit higher, but not all the way up. OK, so what is the condition for having infinitely many degree D points? Well, it's not as nice as what's in the Mordell conjecture, it's just not just in terms of a single geometric invariant. It turns out there are two things that control it, and this is a corollary of faultings two. So about rational points on sub varieties of the Bialyne varieties. So you have that C has infinitely many degree D points, only if one of the following conditions holds. So one mimics what we have up there, that there exists a map defined over Q from C to P1 of degree D. The second condition is what replaces the double cover map to a positive rank elliptic curve. So is that there is a positive rank sub-Bialyne variety, A contained in the Jacobian, contained in pick zero such that if I translate A by a degree D point, that this lives inside the locus, which is usually denoted WD, and this is the locus of effective divisors. Effective degree D divisors. Okay. Wait, which question? What is x? Oh, for some degree D point. So just some translate of a positive ranks of the Bialyne variety that lives inside the locus of effective divisors. So the condition is this containment, not that it lives inside pick D, but that this translate of a positive ranks of Bialyne variety is contained inside the locus of effective divisors. Did that answer your question? Okay. So you can imagine that what's happening in the degree two case is really an accident. It's just that the Jacobian is small enough to mention that the only way that you have a positive ranks of a Bialyne variety contained in this locus of degree two divisors is when you actually have a map to a positive rank elliptic curve. But in general, you don't always have the map. The condition is really that you have this positive ranks of a Bialyne variety. Okay. It's not as nice, but it's what's true. So that's what we have. Okay. So motivated by this theorem, I'm going to make the following definition. So I'm going to let X and C be a point of degree D. Then we'll say that X is P1 parameterized if there exists a map F defined over Q. Again, if your curve is not over Q, then you just take this over the field of definition. From C to P1 such that the image of F is actually a Q point on P1. So this mimics example one, the way we get infinitely many degree D points is we fix a Q rational point on the base and then look at the pre-image. Another way that you can think of this is that if I map the point down and then take the, oh sorry, if this is true, there exists of degree D. So then once you have the condition that the map is degree D, the condition that F lands inside of Q is equivalent to the condition that when you map it forward and take the full pre-image, this is just X itself. There's no other points living that also map to F. That's another way to think about it. I'll say it's P1 parameterized if this holds and P1 isolated otherwise. If it's P1 isolated, then the divisor doesn't move in a family parameterized by P1. So the second condition, as I'll say the X is AV parameterized. If there exists a positive rank sub-abelian variety, a positive rank over Q, sub-abelian variety such that the translate lies inside this WD, the locus of effective divisors. 
Now this X is the X that I'm defining this property for. We'll say that it's AV isolated otherwise. Then three, I'll say the X is isolated. If it is AV isolated and PV isolated, sorry, MP1 isolated. So I think you can see from looking at the statement of Falking's second theorem that these conditions mimic in there how you get infinitely many degree D points. Applying Falking's theorem repeatedly with some additional work. So maybe I'll say Falking's plus. You have to do a little bit extra work and I believe this is not written down anywhere else. So if you want to find the details, you can look in our paper. So one is that a curve has infinitely many degree D points. This is if and only if there is a degree D point. That is not isolated. So you just have to do a little bit extra work to say that if you have a degree D point, that's not isolated. You actually get infinitely many degree D points and not degree at most D. That's the only extra work you have to do that doesn't make it immediate. And two, here this is Falking's plus and Riemann-Rach, that if you have any curve and you look at all of the algebraic points on it. This is a definition you make for an algebraic point of arbitrary degree. So we're not fixing the degree. But even if you don't fix the degree, any curve has only finitely many isolated points. So all of the algebraic points on C come in infinite families except for isolated points. And I think from this, as much as we understand positive rank subabillion varieties of Jacobians, we understand the parameterized points. We sort of know where they come from. They come from these other well-behaved arithmetic things. These isolated points. Right, yeah, and that comes from Riemann-Rach. Once your degree is high enough and you know it moves in a family parameterized by P1. So it's not surprising. It just, I think, shows really what we're getting at with this definition. OK, so that's what you get for a single curve. And sort of if you don't know anything else about the curve, that's the best that you can say. I mean, you can probably rig a curve, probably a pie genus, but with as many isolated points as you want, just like you can rig a curve with lots of rational points, just by passing it through doing polynomial interpolation. OK, but often we care about curves that arise in families. We don't care about sort of this curve. A lot of the curves that we saw as examples in talks were curves that arose from some moduli problem, and those naturally arise in some family. So let's do an example. So for example, we can take part of the family that appeared in Drew's talk. So we can take the modular curve x1 of n, which is a compactification of the curve y1 of n, which parameterizes isomorphism classes of pairs of elliptic curves to be a point of order n, up to isomorphism. So Andrew went through this in detail in his talk. So here I just care about that if you have a point on the elliptic curve of order n, it gives you a point on this y1 of n. And the field of definition of this point is the size of the Galois orbit up to isomorphism. OK, so. And when you have these families, I mean points of order n give you points of lower degree. So if you have two integers that divide each other, then we have a map from x1 of n to x1 of n that just sends the pair E p to E n over mp. OK, so we have this family for all integers n, and they sit in this tower. So at the bottom, we have the j line. And then we have x1 of 2, x1 of 3, x1 of 5. Sort of all the primes down here are mapping to it. 
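For reference, a hedged reconstruction of the definitions and setup just described, following the speaker's wording. Let \(C/\mathbb{Q}\) be a curve and \(x\) a closed point of degree \(d\).
- \(x\) is P1-parameterized if there is a degree-\(d\) map \(f\colon C \to \mathbb{P}^1\) over \(\mathbb{Q}\) with \(f(x) \in \mathbb{P}^1(\mathbb{Q})\), equivalently \(f^*(f(x)) = x\) as divisors; otherwise \(x\) is P1-isolated.
- \(x\) is AV-parameterized if there is a positive-rank abelian subvariety \(A \subseteq \operatorname{Jac}(C)\) over \(\mathbb{Q}\) with \([x] + A \subseteq W_d\), the locus of effective degree-\(d\) divisor classes in \(\operatorname{Pic}^d(C)\); otherwise \(x\) is AV-isolated.
- \(x\) is isolated if it is both P1-isolated and AV-isolated.
For the modular curves: a point of order \(N\) on an elliptic curve gives a point of \(Y_1(N)\) whose degree is the size of its Galois orbit up to isomorphism, and for \(M \mid N\) there are natural maps \(X_1(N) \to X_1(M)\), \((E, P) \mapsto (E, (N/M)P)\), which assemble the curves into the tower being described.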
And then we can go up: X1(6), X1(4), and so on. So we have this additional structure. If we want to know any of the curves in this family, we're interested in the points on them, but we're also interested in how they fit together. Motivated by this problem, we want to study how these isolated points behave under morphisms. So I have a map of curves, defined over Q — C and D are curves over Q — and I'm going to fix an algebraic point x on C and let y be f(x). If the degree growth of these points is as large as possible, that is, if the degree of x equals the degree of f times the degree of y — another way to say this is that when you map down and take the preimage, you just get x back, with no other points — then these properties of being isolated push down along the morphism. So if x is P1-isolated, then y is P1-isolated; if x is AV-isolated, then the same is true for y; and so, if x is isolated, then y is isolated as well. OK, so that's our result for general curves, not thinking of them as having a moduli interpretation or arising in a big family: just a morphism from one curve to another. Now I want to think about applying this result to the modular curves. So let's first review what we know about points on modular curves, particularly these modular curves, and then we'll go on from there. I'll leave this up here, since I didn't write up the proof. Probably the landmark result for understanding points on X1(N) is Merel's uniform boundedness theorem: if you fix a positive integer d, then there is a bound B, depending only on d, such that for all number fields K of degree d and all elliptic curves E over K, the torsion subgroup of E(K) has size at most B. So the size of the torsion is bounded depending only on the degree of the field of definition of the elliptic curve. Now let's take this and translate it into the objects we're looking at. This is about elliptic curves and torsion points, so, as you can see from the definition of Y1(N), it is related to having points on a modular curve. How do I translate it? An equivalent formulation — everything here is equivalent, and they're also all true — is that there is a bound C, depending only on d, such that for all integers N greater than C(d), the modular curve X1(N) has no non-cuspidal degree d points. So as I go high up this tower, the degree of the points has to grow: if you stick with a bounded degree, there are no points of that degree high enough up, at least if you stay away from the cusps. That is just talking about points of a particular degree, but in this talk I want to focus on these mysterious points, the ones I'm calling isolated. So what does this say about them? The following statement is implied by Merel's theorem, and you can go back using work of Frey and of Abramovich — two separate papers. What it says is that as N goes to infinity, if you look at the non-cuspidal points — so let's just look at Y1(N) — and you take only the isolated points and the minimum of their degrees, then this also goes to infinity. So Merel's theorem is a great theorem.
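A hedged restatement of the two formulations just given, in the speaker's notation with \(d\) fixed. Merel's theorem: there is a constant \(B(d)\) such that for every number field \(K\) with \([K:\mathbb{Q}] = d\) and every elliptic curve \(E/K\),
\[
\#E(K)_{\mathrm{tors}} \;\le\; B(d).
\]
Equivalent formulation on modular curves: there is a constant \(C(d)\) such that for all \(N > C(d)\), the curve \(X_1(N)\) has no non-cuspidal point of degree \(d\). And, using the work of Frey and of Abramovich mentioned above, one can pass between this and the statement that the minimal degree of a non-cuspidal isolated point of \(X_1(N)\) tends to infinity as \(N \to \infty\).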
It tells us so much we have no comparable theorem like this for other objects, even though that would be awesome if we don't know it. But yet, it still does not rule out the existence as these isolated points. All it tells us is that as they go up, they're even higher and higher degree. So in some sense, it's even more and more hopeless to find them. OK. OK. So for the rest of the talk, what I want to do is study non-cuspital isolated points on the tower x1 of n. OK. So first, let's look at cm points. So by prior work of Clark Cook Rice and Stankiewicz, which was then extended by Sutherland, so we know that if n is sufficiently large, so I really mean for every n bigger than some bound, then there are cm isolated points on x1 of n. OK. They didn't phrase it in this way, but what you get, and actually you get an even stronger version of isolated, you get a point, the so-called sporadic point. So we say that x and c is sporadic if there are only finitely many points y and c with the degree of y less than or equal to the degree of x. So not only the point doesn't come in an infinite family, but there are no infinite families of that degree or less. So in fact, the points are sporadic. OK. So the first four authors are the case when n is prime, and then Sutherland extended their argument to composite n. OK. And this should fit in with our intuition. I mean, we know cm points. They give us points on modular curves that are of lower degree than they expect. They don't come in an infinite family. I mean, it's amazing that you can really get them on every single n once n is sufficiently large, but the fact that they are these kind of mysterious points, that you should just think of that part as fitting your intuition of what. How cm points behave and what kinds of points we would expect to be mysterious. OK. So on every n, you get them. So what we show is that also when you get them, they sort of propagate up the tower. So if e has cm by an order in k, so k is just the cm field for e, then for all sufficiently large primes that splits in this field, there exists an x and x1 of l, whose j invariant is the same as the elliptic curve you started with. That is sporadic, so isolated in this extreme sense. And moreover, every preimage of it higher up in the tower is also sporadic. So if you have a y and x1 of nl that maps to x, then y is sporadic as well. So not only do you get a sporadic point there, but that sporadic point then propagates all the way up the tower. OK. So cm points really want to be sporadic points, and then give you sporadic points in a lot of different ways any measure that you could think of. OK. So now let's focus on sporadic points corresponding to non-cm points. OK. So I'm going to let e be a non-cm elliptic curve over a number field f over a. And I'm going to let little m be 6 times the product of all of the primes greater than 3, where the l-addict-galois representation is not surjective. OK. So since e is not cm, this will be a finite set of primes. And I'm going to let capital M be the level of the little m-addict-galois representation. So ie m is the minimal integer such that the image of the m-addict-galois representation for e is just the complete preimage of the capital M representation. So then for that value of m, so then given an x and x1 of m, let y be the image of x when you map down, let's say under, the map x1 of n to x1 of the GCD of n and m. So in particular, this integer is always bounded by m. 
So we go from arbitrarily high up in the tower to something just bounded on the level of the scalar representation. And then when you have that, under this map, being isolated pushes down. So if x is p1 isolated, then y is p1 isolated. If x is av isolated, then the same is true for y. And also, so I forgot to put this in the theorem because I hadn't defined it yet, if x is sporadic, you also push that down. OK, so these Morrell's theorem doesn't rule out the existence of these sporadic or isolated points high up in the tower. It just says their degree has to grow big. So this is sort of the opposite. It says, OK, they might live high up in the tower. But if the J invariant is small, which could happen when you're mapping down that the J invariant gets a lot bigger, then when you look at this level of the M-atical-Warp representation, you can push that isolated point all the way down in the tower. So even if they live high up in the tower, they somehow started life somewhere low down in the tower where we can hope to start to understand them. OK. OK, so I think you can, at least from the conclusion of these theorems, you can see that they're related. Basically, what we show is that when you have this condition on the Galois representation, then you have the maximal degree growth of the points that we need over here. OK, and that's not too surprising. So maybe the thing that's not obvious is, OK, if you took this theorem and you had worked a lot with Galois representations of elliptic curves, then probably you could figure out how to get the statement when M is the level of the idyllic Galois representation. So if you remove this M and you just say, OK, the whole Galois representation is the pullback from the M-addict part. That's basically because the Galois action will be as big as possible, so you expect the points to stay around. So why did we put in the work to get the level of the M-addict Galois representation instead of the idyllic Galois representation? So if you look at elliptic curves, even over a fixed number field, so say over Q, the level of the idyllic Galois representation, so the minimal M, non-CM elliptic curves, minimal M such that the image is the pre-image of the mod M part, this can be arbitrarily large. And what happens is that you can have entanglements between the different ellatic Galois representation. So even for a series of curves, even for where the ellatic Galois representation is surjective for every L, what you can get is that this M is twice a prime and that other prime can be arbitrarily large. So but for a fixed integer little m, the level of the little M-addict Galois representation can be uniformly bounded in the strong sense of Morel, depending only on the field of definition of the elliptic curve. OK, so if you fix this little M, then the capital M that you need to take is uniformly bounded. So that is not true if you remove the little M. So let's look how we get little m. Well, it's 6. That's uniformly bounded. And then it's the product of primes where the Galois representation is not surjective. So that's not known. That was raised as a question by Sarah for elliptic curves over Q, but then in our collective dreaming, you can find this question stated, OK, what if it's bounded? Not just what if these primes are bounded for elliptic curves over number fields, maybe just in terms of the number field or just in terms of the degree. So that is. 
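A LaTeX version of the push-down theorem just discussed, in the notation of the talk; the hypothesis that the j-invariant of x equals j(E) is implicit in the discussion above and is made explicit here as an assumption.

```latex
% The push-down theorem for non-CM curves, in the notation of the talk.
\begin{theorem}
Let $E/F$ be a non-CM elliptic curve, let
$m = 6\prod_{\ell > 3,\ \rho_{E,\ell}\ \text{not surjective}} \ell$,
and let $M$ be the level of the $m$-adic Galois representation of $E$,
i.e.\ the minimal integer such that the image of the $m$-adic representation
is the full preimage of its mod-$M$ reduction. For $x \in X_1(n)$ with
$j(x) = j(E)$, let $y$ be the image of $x$ under
$X_1(n) \to X_1(\gcd(n,M))$. Then
$x$ $\mathbf{P}^1$-isolated $\Rightarrow$ $y$ $\mathbf{P}^1$-isolated,
$x$ AV-isolated $\Rightarrow$ $y$ AV-isolated, and
$x$ sporadic $\Rightarrow$ $y$ sporadic.
\end{theorem}
```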
So if you had a positive answer to that question, too, a generalization of Sarah's question, which is about a uniform bound on non-surjective primes. So if you had a positive answer to this question, then I would tell you in our theorem that this capital M can be uniformly bounded depending on, say, either the field where you take Q adjoining the J invariant or if you're really optimistic slash greedy, depending only on the degree of this field. OK. So I mean, in general, this question for sort of arbitrary number fields, I think it's a question on math overflow, but I don't think it's a question in print anywhere else, except we repeated it in our paper to do this remark. But I don't think there's enough evidence one way or another to say over other number fields. But over Q, there is quite a lot of evidence. And Drew Sutherland and David Zouina have both conjectured this in paper with their names attached. That there is a bound. So let's see what this theorem gets you, or really the proof of the theorem, gets you when you apply it to elliptic curves over Q, where it tells us these sporadic points should come from. Question? OK. OK. So let me just repeat what I said, which is that over Q. There is substantial evidence for such a uniform bound. And it has been conjectured formally by Sutherland and Zouina independently in papers where they have a lot of evidence for this conjecture. OK. So for your question, which then implies a bound on this. So for the question. Yes, for the question. Am I right in calling it a question and not your conjecture? OK. So given this evidence, you could hope to maybe within our reasonable time frame try to understand where all the sporadic points on x1 of n come from, whose J invariant is Q. So let x1 of n be a non-cuspital non-CM isolated point with rational J invariant. OK. OK. So I'm getting short on time. So we're going to have three conditions on Galois representations, one which will include this conjecture, and then two other ones. And I will write them down. But first, I just want to state the conclusion. So if you have 1, 2, 3, which are assumptions on possible Galois representations of elliptic curves over Q, then you know that there exists a divisor of the form 2 to the a, 3 to the b, p to the c of n, such that the image of x in this modular curve of low level is isolated. n. What do we have here? So p to the c is less than or equal to 169, and then this is a prime power of r1. And a, you come bound on something that depends on p, and b also depends on p, and the max. So there's only finally many primes in this range. So what you get is that this is 15, and the max of b is 8. But these maximums are really achieved for different primes. So something dividing 2 to the 15 times 3 to the 8 p to the c is always a bound, but you can actually get slightly better for different values of primes. OK, so if these assumptions hold, which I will end with, then you get this fairly short given sort of the possibilities list of levels where the isolated points sort of begin life. And they might continue high up the tower. So for the cm points, they definitely go high up the tower. We don't know about non-cm points if they continue to propagate, but you at least can detect their existence low down in the tower. So maybe let me just write quickly these assumptions, and then I'll end there. So the first one is that for all elliptic curves, that's because the assumption L for all elliptic curves, e over q with this j invariant. 
OK, really you don't need the assumption for all. It's enough to check it for one, but that's OK. OK, so here I want that the L at a Galois representation is surjective for all L for all primes greater than 17 and not equal to 37. And then I want something about being not contained in the normalizer of a non-split carton. So here I want that there is at most one prime L dividing N that's greater than 3, such that the L at a Galois representation is not surjective. And then the third one is that for all L dividing N, L greater than or equal to 37, the level of the L at a Galois representation is less than or equal to 169. Too high. Oh, I won. OK, so the first statement about this positive answer, I think there's strong evidence for it, but probably a proof is still far off. But for cases 2 and 3, I think that those conditions are within reach. And the program that Drew talked about yesterday with Jeremy Rouse and David Zurich-Brown, that should give us 3. And 2 is also being worked on by David Zurich-Brown with Camacho, Lee, Morrow, and Patak. And I think that is also within reach soon. So 2 and 3 should soon no longer be assumptions. And that's just this positive answer. And then we just have to look at those curves. All right, thank you. Good questions for Sarah, so could we just chat a little bit?
Faltings’s theorem on rational points on subvarieties of abelian varieties can be used to show that all but finitely many algebraic points on a curve arise in families parametrized by P1 or positive rank abelian varieties; we call these finitely many exceptions isolated points. We study how isolated points behave under morphisms and then specialize to the case of modular curves. We show that isolated points on X1(n) push down to isolated points on a modular curve whose level is bounded by a constant that depends only on the j-invariant of the isolated point. This is joint work with A. Bourdon, O. Ejder, Y. Liu, and F. Odumodu.
10.5446/53534 (DOI)
Thank you for the introduction. And thank you also for the invitation for giving me the opportunity to give this talk on ORA polynomial and application to coding theory. So let's start. So I first would like to introduce what are ORA polynomials. So this, can you see? Really? So it's setting in blue. It's better in black. Okay. Yeah. So what is? Oh. It's okay. So I'm going to write this as a, just copy the notation in the blackboard. So K is a field. So setting is the following. So K is a field. I consider morphism offering from K to K, which I call phi of phi. So morphism offering. And then I consider D from K to K, which is what we call a feed derivation. So what is, by definition, a feed derivation? It's an application from K to K, which is additive and satisfies this twisted lemnitz rule. So D of AB equals phi of A times D of B plus D of A times B. So it's what we call a feed derivation. And you see that if phi is the identity, then this condition is just the usual lemnitz rule. D of AB equals A times D of B plus D of A times B. Okay. And now I can define the O-ring, which I denote by K of X, phi and D. So it's a non-commutative ring in general. It is described as follows. So first I describe its elements. So elements are just usual univariate polynomials over K. So I insist that are univariate polynomials. So the variable is this X and this phi and this D just, in the notation, some extra letters for denoting the twisting is isomorphism. They are not variables. Then the addition is also very natural. It's a standard addition. But the multiplication is a bit different. It's a usual multiplication. It is twisted by this rule. If you have the variable X multiplied by a scalar on the right, A, then it's given by this expression, which is phi of A times X plus D of A. So we have something which is twisted. Okay. So this is the definition of O-ring polynomials. And let me show you some example of computation. So I recall you the rule here. And let me show you how we can compute X square times A. So you just write that it X times X A. And now use the commutation relation to say that X A is phi of A times X plus D of A. You expand everything. You get this expression. Then this X times phi of A, you can just reuse this rule with X replaced by phi of A in order to get an expression of this. Then you get this. X times D of A is something like that. There is not something important for the results. And then we refactor everything. And you get this expression for X square times A. So I think I have commutious that we can compute in the same way X cubed times A and X fourth times A and so on and so forth. And so we have an expression for X to Z times A for any integer N. Okay. And now if we want to compute the product of two O-ring polynomials, for instance two O-ring polynomials of Db2, like this, then you just expand everything without permitting any X and A, B, C, D. You get this expression. And now X cubed, X to the fourth, sorry, is X to the fourth. And the second sum X to the square times CX, you can rewrite it using the expression of X to the square times C. Get some here. And you get this expression. And so on. And you continue for the other summands. I don't want to be complete because the result is quite long. But you get so this computation is here for two, because I would like to, I wanted to convince you that this rule I used to define some multiplication, really define some multiplication in this ring of polynomials. Okay. And here another example. 
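Since the whole multiplication is determined by the commutation rule X·a = phi(a)·X + d(a), it may help to see it implemented directly. The following is a minimal sketch, not the speaker's code: Ore polynomials are plain coefficient lists [c0, c1, ...] standing for c0 + c1·X + ..., and phi and d are user-supplied callables standing in for the ring endomorphism and the phi-derivation.

```python
def ore_mul(f, g, phi, d):
    """Product f*g in the Ore ring K[X; phi, d].

    f, g are coefficient lists [c0, c1, ...] meaning c0 + c1*X + ...;
    phi is a ring endomorphism of K and d a phi-derivation, given as
    Python callables.  The only non-commutative ingredient is the rule
    X*a = phi(a)*X + d(a).
    """
    def x_times(p):
        # Left-multiply p by X: X*(c*X^k) = phi(c)*X^(k+1) + d(c)*X^k.
        out = [0] * (len(p) + 1)
        for k, c in enumerate(p):
            out[k + 1] += phi(c)
            out[k] += d(c)
        return out

    result = [0] * (len(f) + len(g) - 1)
    x_power_g = list(g)                  # holds X^i * g, starting at i = 0
    for a in f:
        for k, c in enumerate(x_power_g):
            result[k] += a * c           # a * (X^i * g): scalars act on the left
        x_power_g = x_times(x_power_g)   # move on to X^(i+1) * g
    return result
```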
So if you are working over the complex number C, with, and you choose for the twisting isomorphism, the conjugacy, X maps to the conjugate of Z maps to the conjugate of Z. And if you are just taking zero for the derivation, so in what follows, I will often you take D equal zero and so I will omit it in notation. So in this, in this particular ring, then you have this computation, for instance, which is a remarkable identity. So X plus A times A times X minus the conjugate of A is X squared times the norm of A squared. Okay. So you can check this by just by expanding the, sorry, just by expanding the left hand side. And as a consequence, you see that our polynomials actually can admit many, many factorization, for instance, the polynomial X square minus one in this ring as factors as X plus A times X minus A conjugate for all A of norm one. If you can see many factorization for one polynomial, so it's a bit different from standard polynomials. But in any case, our polynomial look like usual polynomials. For several, several features with usual polynomials, for instance, the first thing is that we have a well-notion of degree, the well-defined notion of degree on our polynomial. So this is a usual standard degree. And this degree behave as we expect. So if you have two polynomial, or polynomials F and G, then the degree of the sum is at most the maximum of the degree of the summands. And the degree of the product is the sum of the degree of the terms. As you can imagine. Maybe something which is not so a little bit more unexpected is that this ring of all polynomials is right Euclidean. It admits right Euclidean division. So it's just this. If you have A and B two polynomials with B nonzero, then you can just write A equals QB plus R for Q and R two polynomials which are uniquely determined with this condition and the fact that the degree of the reminder is at most is less than, strictly less than the degree of the divisor. So standard fact accepts that you have to be careful that you really have to write A equals QB plus R and not A equals BQ plus R otherwise false. Anyway. And so the corollary is that this ring of all polynomials K of X, V, D is left principle, meaning that all left ideal or principle are narrated by one element. And you have a notion of right GCD and left LCR. So you have to be careful between left and right. But okay. So this property has really, so you can continue this list. For instance, there is a Euclidean, the usual Euclidean algorithm makes sense in this non-commutative context and so on and so on. Okay. So these are the polynomials. And I will of course come back on this, but this is a very standard property of them. I would just want to present in the first very short first part of my talk. And maybe you are wondering why we are interesting in this polynomials. So I will, in the rest of my talk I will explain some application to coding theory of this setting. But this polynomial was first introduced by O.A. in the 40s, I think, if I remember correctly. And the motivation was not coding theory at that time. It just one, two, it just because very, very standard construction in non-commutative algebra. It really helps. It's really the first example, maybe not the first, but one of the first examples of non-commutative algebra and non-commutative rings. It's just like polynomials are useful for studying rings. O.A. polynomials are really useful for studying non-commutative rings, non-commutating ring theory. It's a very central object. 
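As a quick sanity check of the sketch above against the identity just mentioned, take K = C with phi the complex conjugation and d = 0; then (X + a)(X − ā) should come out as X² − |a|², so X² − 1 factors this way for every a of norm one. The particular value of a below is just an illustrative choice.

```python
phi = lambda z: complex(z).conjugate()   # twisting automorphism: complex conjugation
d   = lambda z: 0                        # zero derivation

a = 1j                                   # any a with |a| = 1 works
f = [a, 1]                               # X + a
g = [-a.conjugate(), 1]                  # X - conj(a)

print(ore_mul(f, g, phi, d))             # [(-1+0j), 0j, (1+0j)], i.e. X^2 - 1
```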
And it's also very useful for studying semi-linear algebra and linear differential equations. So for instance, if phi is the identity, then the ring of O.A. polynomials are nothing but the ring of linear differential operators. And so it's very useful for studying linear differential equations. And if d is zero, then it is related to semi-linear algebra, just like usual polynomials are related to linear algebra. So you have the characteristic polynomial factorization polynomial of random of theorem and so on. And if you are not interested in linear algebra, but semi-linear algebra for some reason, then the natural setting is this, or a polynomials. Okay. But it's not what I want to discuss in this talk. What I want to discuss is application to coding theory because, sorry, some. And so before I probably need to recall you some basics of coding theories. Although probably you will know what I'm, what would be on this slide. Anyway, so first of all, the definition of a linear code. So k is still a field. I gave a field as before. And so a linear code is a subspace, k linear subspace of k to the n for some integer n. Okay. So this integer n is called the length of the code. The dimension of the vector space is called the dimension of the code without any surprise. And another important notion on linear code is the minimal distance. So the minimal distance of a code is the minimal of w of x for any x in c is not zero. So the elements in c are usually called the code walls. So I will probably use this terminology in what follows, but I will try to avoid it. And w of x is what we call the aming weight. And the aming weight by definition of a vector. So x is an element of k to the n. So x1, x2, xn is a number of nonzero coordinates of x. So this defines a distance on k to the n. So the distance between x and y is a number, is a weight of the difference x minus one. And so it's also the number of coordinates on which x and y differ. So here is a beautiful example of linear code. So you can see, I hope, it's not so important to see this number here, but it's seven bits. So it's a natural subspace of f2 to the seven. So it's a linear code of f2 of dimension of length seven. Sorry. It has dimension three because one, two, three, four, five, six, seven, eight elements. And you can see that it's a linear subspace. You can check that it's a linear subspace. And it has a minimal distance four, I think, because each nonzero, actually each nonzero element of this code has four, one, and three, zero. So the minimal, actually it's even better. The weight of any nonzero element is four. So the minimal is also four, of course. Okay. And you have a theorem, a one-run theorem, which is called the singleton bound, saying that if you have any linear code, then there is a relation between the, I mean, inequality, inequality between the distance, the dimension, and the length, what is the minimal distance plus the dimension is, that's why it's at most the length plus one. So it's really easy to prove this theorem, this bound. And we might be interested in those codes for which this bound is actually an equality, as are called in general MDS code for maximal distance separable codes. Okay. So now I want to give you, sorry, I want to give you a very classical construction of codes, which are read solvent codes, which is very important for theory, in the theory. So the setting is the following. We have k and a, k and n, two positive integers with k less than n, or equal, but in general, k would be less than a, n. 
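To make the 7-bit example and the Singleton bound concrete, here is a small brute-force sketch. The generator matrix is an illustrative choice (the binary [7,3] simplex code, whose columns are all nonzero vectors of F_2^3), not the exact matrix shown on the slide; it reproduces the behaviour described above: eight codewords, every nonzero one of weight 4, and d + k = 7 ≤ n + 1.

```python
from itertools import product

# Generator matrix of the binary [7,3] simplex code: the columns run over
# all nonzero vectors of F_2^3 (an illustrative choice).
G = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def hamming_weight(v):
    return sum(1 for x in v if x != 0)

def codewords(G, q=2):
    k, n = len(G), len(G[0])
    for msg in product(range(q), repeat=k):          # all messages in F_q^k
        yield tuple(sum(msg[i] * G[i][j] for i in range(k)) % q
                    for j in range(n))

words = list(codewords(G))
d = min(hamming_weight(c) for c in words if any(c))  # minimum nonzero weight
n, k = len(G[0]), len(G)
print(len(words), d)          # 8 4
print(d + k <= n + 1)         # Singleton bound: True
```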
We are just considering a one, a two, a three, and so on, just until a n, pairwise distinct elements of k. So it's in place in particular that n is less than or equal to the continuity of k. So if you're working over finite fields, and this gives some restriction. And what is associated with salamone code is, I will denote it by read salamone of k and y and two and so on. It's the image of this mapping. So you take k of x, the ring of polynomial ring of k, usual polynomial ring, not twisted. This subscript less than k means that I'm considering all the polynomials of degree less than k, strictly less than k. And when I want, when I have such a polynomial, I can evaluate it at a one, a two, a three, and so on. And I get this way, a triple of n elements in k, and the code is by definition, all the tuples I can obtain this way from a polynomial of degree less than k. So of course, if k is n, then this map is a bijection. But if k is less than n, in general, it's only injective. And so I get a strict subspace of k to the n, which is called the read salamone code. And the theorem is that we really, we know the parameters of this code. So of course, the length is n by definition. It's not difficult to compute. The dimension is k because this map is injective. And the minimal distance is n minus k plus one. It's just, it's full of the fact that the polynomial of degree at most k minus one has at most k minus one zero. And so you see that in that particular case, singleton bound is reached, right? So it's an example of maximum distance parameter code. So this code is interesting for this reason. They are so interesting because they allow for an efficient decoding algorithm. So I don't want to speak about decoding in that talk. So I will skip this, but there are important codes. Okay. So that's for read salamone codes. And now I want to define what are, what are gabidun codes. And I will just show you that it's almost the same. I mean, it's a bit different, but it's the same spirit. So instead of taking a pairwise distinct element in k, I'm taking a linearly independent element of k. So linearly independent of a FP. And now I have to assume that k is a finite field FQ. So in that case, associated gabidun code, which is now denoted by gab of k and a one a n is the image of almost the same map except that I'm not considering all polynomial, but what we call linearized polynomial. And of height, in general, we say height instead of degree now, strictly says nk. And what is a linearized polynomial is something like this. Just a polynomial with a term in x, in x to the p, x to the p square, x to the p cube and so on, until x to the p to the k minus one. So it's called linearized because mapping x maps to x, x maps to x to the p, and so on, are FP linear. And so this is almost the same except that instead of evaluating polynomials of degree less than k, we are evaluating linearized polynomial of height less than k. Otherwise, exactly the same. And the point is that we have exactly the same theorem. The length, of course, is n, the dimension is still k, and the minimal distance is still l minus k plus one. So there are also maximum distance separable codes. And it's even better than k's because minimal distance is n plus k minus one, but for the having distance I've defined earlier, but also for another distance, which is finer in some sense, which is because the wrong distance, and what is the wrong distance? It's a kind of linearized version of aming distance. 
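The Reed–Solomon construction is short enough to spell out as code. The sketch below works over the prime field F_7 with six distinct evaluation points (an arbitrary illustrative choice); brute-forcing all nonzero message polynomials of degree less than k confirms the minimum distance n − k + 1, so the Singleton bound is met.

```python
from itertools import product

p, k = 7, 3
alphas = [1, 2, 3, 4, 5, 6]        # n = 6 pairwise-distinct points of F_7
n = len(alphas)

def evaluate(coeffs, x):
    # Horner evaluation of sum_i coeffs[i] * x^i over F_p
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def rs_encode(msg):
    # msg = coefficients of a polynomial of degree < k
    return [evaluate(msg, a) for a in alphas]

d = min(sum(1 for y in rs_encode(m) if y != 0)
        for m in product(range(p), repeat=k) if any(m))
print(n, k, d, d == n - k + 1)     # 6 3 4 True
```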
So instead of counting the number of coordinates of x, which are nonzero, we consider the span of all the coordinates of x, and we count the dimension of the span over FP. So if you have, I mean, if you have, for instance, if you have the vector one, one, one, one, one, then the dimension of the span is one, even if the coordinates are all different from zero. And so this wrong distance is less than the aming distance. It's not because of the aming distance, because it's rather clear. And so having a minimal distance, having the possible minimal distance for the wrong distance is better than having, I mean, yes, it's better than having the maximal minimal distance for the aming distance. So it's even better for this. Okay, so let me, so it was, so this code, so there are code gabidlin codes, but actually, that was first defined by del SART, and then rediscovered some years later by WOT, and then rediscovered a second time by gabidlin. But we are called, there are nowadays code gabidlin codes. So the first of all, they were described in this, in these terms, but now we prefer another way of thinking at them. Okay. Okay. So you see, I have this linearized polynomial, but the map, of course, the map x maps to x to the p is the formalus, x map to x to the p square is the square of the formalus. So in fact, I'm evaluating a polynomial in the formalus in a one, a two, a n. So instead of writing, sorry, instead of, so maybe it's not so, instead of, instead of writing f of a one, f of a two, f of a n, I prefer, I can just say that I'm evaluating a polynomial, I'm evaluating the polynomial f at the formalus and then apply it to a one, apply the map I get to a one a n. So now I no longer have a linearized polynomial, but just a standard polynomial, except that it's a polynomial in the formalus, and the point is that the formalus does not come into, does not come into a scalar. If you apply the formalus and then, sorry, no, if you multiply by a scalar and then apply the formalus, you assume that multiplying, applying the formalus, but multiplying by the scalar to the power p because of linearity. And so you have this commutation relation between the variable x and the scalar, x times a in this ring of polynomial is a to the p times x. And so it's exactly, I mean, we are, the domain is a, is a ORI ring, ORI ring and not a standard ring of polynomials. So, okay, so we can just make further simplifications instead of evaluating f of phi at a one, a two and so on. We can just give the f of phi, the linear mapping. So f of phi is a linear mapping. So he given it at, oh, sorry, yes. So I mean that I'm just mapping f of x to f of phi restricted to some subspace v, where v is a subspace generated by a one, a two, and a n, right? So now the image is not k to the n, but it's a set of homomorphism from v to k. And so instead of, and the rank distance now is a bit different. And so we don't have to compute the rank distance of some vector in k to the n, but so for some of some homomorphism from v to k, so the rank distance of u here is actually, if you, if you try to figure out what is this, this is the span of the core of the, the dimension of the span of the rows or the column of some matrix and just the rank of the matrix. So it's a reason for this one, one distance. Okay. And remember that a one, a two, up to a n, I just forgot it and replaced by the span of this. 
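Since the spoken definitions above are a bit hard to parse, here is a compact LaTeX restatement of the rank weight and of the Ore-polynomial point of view on Gabidulin codes, following the talk, with q a power of p and the Frobenius a ↦ a^p as the twisting morphism.

```latex
% Rank weight and the Ore-polynomial description of Gabidulin codes.
w_R(x) \;=\; \dim_{\mathbf{F}_p}\langle x_1,\dots,x_n\rangle_{\mathbf{F}_p},
\qquad
d_R(x,y) \;=\; w_R(x-y) \;\le\; d_H(x,y),

\mathrm{Gab}(k;\,a_1,\dots,a_n) \;=\;
\bigl\{\,\bigl(f(\varphi)(a_1),\dots,f(\varphi)(a_n)\bigr) \;:\;
   f \in \mathbf{F}_q[X;\varphi],\ \deg f < k \,\bigr\},
\qquad
\varphi(a)=a^{p},\ \ X\cdot a = a^{p}\,X .
```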
So instead of the data, the initial data is no longer a one, a n, but a vector space v, which is a, should be here, a sub vector space of k to the n and the gabidulin code I will denote just by gabidulin of k and v. So let me, let me maybe just read, read this once again. So I consider v a sub f p vector space of some f q and then the actuality gabidulin code, which is denoted by gabidulin of k and v is the image of this map mapping or a polynomial to this or a polynomial evaluated at the problem is restricted to v. If you don't want to restrict it, see what you can do also can take the equals case, perfect choice. Yeah. So what do we care about the wrong distance? So of course we can say that it's a very beautiful mathematical construction and that in, in fact, oh, I point out yours. No, sorry, the gabidulin code were defined maybe 50 years ago and indeed there were more or less forgotten, not exactly, but not, not that much, not such work has been done on it for some time and nowadays there is a much more attention on them. So because of practical applications, so there are a few of them, but the most standard one is application to what we call network coding. So network coding is, is this problematic. We have many computers distributed all over the world and this computer which is a server say and this one also has a message a and this one has a message b and this computer wants to transmit this message a to the computer here below and to this one. So two on the, on the bottom and the same for this computer, this server. So the message a should arrive here and there and the message b should arrive here and there as well. So what we can do of course is something like this for instance, you let the message a transmit, go to this pipe and the message a go to this, to this path for instance and the same for the message b, but you say that on the solution this, this pipe is used two times and so it's not so good, so good for the bandwidths. And you can just try to figure out what can be done, but in any, any case you have to use a pipe two times. But you can think at another solution which is the following. So I transmit a here and a and there and b here and there and so this computer has a, this computer receives a, this one receives b and this one receives a and b. And instead of transmitting a and b, it can just compute the sum of a and b, say a and b are elements in some field. I don't know, it's not so important. You can just view them as elements in some any vector space and so it computes a plus b and transmits the sum a plus b and then this one transmit a plus b on the left and then plus b on the right and so it means that this computer has received a and a plus b and so you can just by making the difference reconstruct b and the same on the right. Sorry. And so with this solution we see that we are, we are managed to transmit a and b to the both, we managed to transmit a and b to the both computers we, as we wanted to using only each pipe one time. So it's a solution which is more, more important using practice for network transmission and you see that the theory of natural, I mean distance as a classical coding theory is not really relevant in that case because I mean what we want to do, the problem we have to face is for instance if at some point some server is down or is infected by the various or is, I don't know, there is some problem with some server and in that case the whole message is corrupted. 
And so classical coding theory is not relevant but coding theory with respect to the wrong distance becomes relevant because if some compute, if for instance this message is altered then it will, eventually it will give an error of rank one and if two computers are, are, are, are, I don't know, are infected or something are down or I don't know, then, so resulting in a wrong, we have rank two and so on. And so because we are making all, lead our transformations. And so it's, it's a way to, to prevent, to, I mean to, to, to prevent communication in this model. So it's a reason why gabelian codes are important nowadays. Anyway, you, okay. So there is still another, no, okay. So there is still another generalization of gabelian codes that are related to present now, which is more recent. And for this I need to introduce more evaluation morphisms. So I have new assumptions. So for now I will assume that K is a field equipped with fee as before but K is no longer, needs, needs no longer to be a finite field. It can be any field. And I introduce this subfield F, which is a field of elements fixed by K. So F is a set of, a subfield of K consisting of elements such that F of X equal X. Right. And I assume that K over F, F is finite. So finite extension. So it's a Galois extension. Simply Galois extension with Galois group is generated by fee. In particular, it implies that fee has finite order. It's rejected. So as I said in the definition of gabelian codes, we have encoders, this evaluation morphism mapping F of X to F of phi. And the fact that we are working with our polynomial from the left ensures that this evaluation morphism is actually a morphism of rings. Commute with a multiplication. But actually, more generally, if we have any semilinear mapping from V to V, where V is a K vector space, so semilinear mapping is something which is additive and satisfies this action. So alpha of A X equals fee of A times alpha of X. So it's like linear mapping except that we have to twist the scalar by twisting morphism. So in that case, we have also well-defined evaluation morphism consisting in mapping an O-way polynomial F of X to F of alpha. Yeah. Sorry. Here the evaluation morphism of G of K or just K. The morphism of evaluation. And the morphism of K. So I don't have any question. Okay. So you can evaluate not only at fee, but at any alpha, which is semilinear mapping acting on some K vector space. And the proposition is that it's a classification proposition. It's rather straightforward to prove this proposition. But the semilinear mapping of K from K to K are just a multiplier of five. A times five for some A and K. And so using this, you see what you can do. Instead of evaluating only at fee, we can just evaluate at other alpha of this form and get this way of more general gabelian codes. So it's what we call the linearized gabelian codes, which were defined by Martin Espinaz recently. And also in a different spirit, with different notation by Delphine Boucher. But I will use the presentation of Martin Espinaz, which is more suitable for what I wanted to say here, today. So I consider as before K and M two integers. And A1, AM, elements of nonzero elements of K. And I assume that the norm of the AIs, the norm of K over A are pairwise distinct. So now the right condition. And I'm considering also V1, V2, Vm sub F vector spaces of K. So I'll play the role of the V as a previous slide. 
And then the linearized read salomon code, that's the city to this data, which is denoted by LRS, linearized read salomon of K, A1, V1, A2, V2, AM, Vm. It's the easy major of this mapping. So you take some array polynomial and we evaluate it at A1 phi, A2 phi, AM phi. So it's something which makes the gabelian construction and the read salomon construction. So now the natural points for evaluating array polynomials are these sublinear mappings. And I consider many of these sublinear mappings, A1, A1 phi, A1 phi, A2 phi, A3. And I evaluate the array polynomials at these points. Okay. And theorem is almost the same as before, is that the length of this code is, so it's a computation is a bit, the computation is, so we have to compute the dimension over K of the codomain of this map, which is a dimension of V1 plus a dimension of V2 plus a dimension of Vm. Sorry. Yeah. Yes, because A1 phi is a mapping from K to K. So the length is this. Yeah. So we can imagine that we are considering more general sublinear morphism and yeah, but I'm just sticking to this. So the length is n, the dimension is K, and again the minimal distance is n minus K plus 1. So say the minimal distance for the aming distance, but actually it's a minimal distance for refined distances before, which is called the same rank distance. So you have elements u1, um, ui lies is a morphism from Vi to K, and now you samed the rank of all this ui. Okay. So it's the notion of linearized read-salomon code, and I told you before that the gabin reading code were used for network coding and linearized read-salomon codes are also application in the concrete life and used for what we call the multi-shot linearized, multi-shot network coding. So I don't want to say much about this because I don't know that much actually, but it's not only a beautiful mathematical construction, it's also, it also has applications. Yeah. So now I would like to, maybe I can give this in the blackboard for a moment, and I would like to give you, to reinterpret this construction in a more general metric setting, spirit. So for this I need to give, to explain more things on orypoenomials. So let's first, let me first introduce this notation. R is the dimension of K over half. So it's the order of phi. And I consider this ring, which is the ring of Laurent orypoenomial, I don't know, so it's a orypoenomial in X and X minus one. And because phi is objective, it's not a problem because we have this commutation relation, which is X minus one times A is phi minus one of A times X minus one. So this ring has two important subrings. The first one is F to the X to the R and X to the minus R, Laurent polynomials in F in the variable X to the R. And it's actually the center of A. So it's quite, we'll play a quite important role, you can check this, it's not, it's not difficult. And in between, maybe I will write this again on the blackboard. So here you have A, which is K of X. Here you have the center and in between you have C, which is K of X to the R and to the minus R. So this is the center of A and C is the maximal sub-comutative sub-algebra. Okay. And what you can do is you can extend scalar to C from Z to C. And in that case on the, here, this way. So you have the tensor product. And since this C over Z is a Galois covering because it just obtained by extending the scalar from K to F, which is a Galois extension, then C times C, sorry, C tensor C over Z is just split as a product of copies of C, of R copies of C, C to the R. 
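A LaTeX summary of the linearized Reed–Salomon construction just described, in the notation of the talk: the a_i are nonzero elements of K with pairwise-distinct norms down to F, the V_i are F-subspaces of K, and the relevant metric is the sum-rank weight.

```latex
% Linearized Reed–Solomon codes (Martinez-Penas / Boucher), evaluated at a_i * phi.
\mathrm{LRS}(k;\,a_1,V_1,\dots,a_m,V_m) \;=\;
\bigl\{\,\bigl(f(a_1\varphi)|_{V_1},\,\dots,\,f(a_m\varphi)|_{V_m}\bigr)
      \;:\; f \in K[X;\varphi],\ \deg f < k \,\bigr\},

n \;=\; \sum_{i=1}^{m}\dim_F V_i,\qquad \dim \;=\; k,\qquad
d_{\mathrm{sum\text{-}rank}} \;=\; n-k+1,\qquad
w_{\mathrm{sum\text{-}rank}}(u_1,\dots,u_m)\;=\;\sum_{i=1}^{m}\operatorname{rank}(u_i).
```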
And here you have C tensor A and it turns out that C tensor A are also a very interesting interpretation. We have this isomorphism, C tensor A isomorphic to the algebra of C linear transformation of A. So the mapping is following. Okay. So C tensor A maps to the map, C acts by multiplication on the left and A acts by multiplication on the right. And the important point is that A is a C module, of course, but which is free of rank R. The basis, for instance, is 1 X, X square, X to the cube and X to the R minus 1. And so this is just a matrix algebra over C of dimension with R rows and R columns. So A is some or a polynomial, but when we extend scalar from Z to C, then we get a matrix algebra. And if you know algebraic geometry, Mike knows that this is a definition, almost a definition of Azumaya algebra. So Azumaya algebra is an algebra, we become isomorphic to a matrix algebra after an et al extension. And here's an extension C over Z is et al. Because it just, it comes from, as I said before, it's a Galois covering, it comes from the K over, it's deduced by scalar extension from the extension K over F, which is Galois. So we have an Azumaya algebra and when you have an Azumaya algebra, you have a lot of interesting facts. So for instance, we have what we call the radius trace coming from the trace. So when you have a matrix algebra, you have the trace determinant and so on. And this descend to A. So you have a radius trace, a radius trace map, which comes from A to Z, I don't know, which is given by this explicit formula. It's not so important if you can see the formula, just to say that it's explicit. And in the same way, you have the radius norm and the radius norm corresponding to the determinant. And also you have an explicit formula only for polynomial of degree one, otherwise much more complicated, but anyway, I won't show them. And now I just rephrase this. So it's a commutative variable I had before. And now I can just make a picture instead of diagrams. So I draw the spectrum of the center, spectrum of the, just a straight line. It's A1 with 1.3 moved. I have a covering of spec Z by spec C, which is a straight line, but defined over K, not over F. I have another copy of spec C here, and now I have the tensor product. So it's a fiber product and is a Dijon student of R copies of spec C, so R equals 2 in my drawing. And so above one point, I have two points. And I can consider the vector space, the K vector space generated by these two points, which are drawn here. And when I let the point vary on the axis, on the axis, I can just make the construction vary on the, so the vector space move on the top. And what I get is a vector bundle. And this isomorphism, saying that C tensor by A over Z is equal to some matrix algebra, just says that C tensor A is the under morphism of this vector bundle. And in fact, we can just, you can just let phi act on the under morphism of this vector bundle, and so A can be recovered under morphism of this vector model stable by phi. So I don't know how to draw spec A because A is non-commutative and I'm not really, I don't really know how to draw the spectrum of a non-commutative algebra, so I just don't do it, but I'm doing the under morphism of the vector bundle. Can you say again what the vector bundle, what the fibers, it's a vector bundle over C? It's a vector bundle over C whose fiber is just, so it's a trivial vector bundle over C of dimension R. 
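The blackboard picture just sketched can be summarized in LaTeX as follows, with r = [K : F]; this mirrors the talk's tower Z ⊂ C ⊂ A and the splitting after the étale base change from Z to C.

```latex
% The center, the maximal commutative subalgebra, and the Azumaya property.
A \;=\; K[X^{\pm 1};\varphi],\qquad
Z \;=\; Z(A) \;=\; F[X^{r},X^{-r}],\qquad
C \;=\; K[X^{r},X^{-r}],\qquad r \;=\; [K:F],

A \text{ is free of rank } r \text{ over } C
  \text{ with basis } 1, X, \dots, X^{r-1},
\qquad
C\otimes_{Z} A \;\simeq\; \operatorname{End}_{C}(A) \;\simeq\; M_{r}(C),
```

so A is an Azumaya algebra over its center Z.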
And if I have some point Z on the, on spec Z, I can try to figure out what is the fiber of A over Z, which is by definition this quotient. And by the general theory of azimaya algebra, you know that it's a simple central algebra over Z. The point is that this vector bundle does not descend to C. And so it's a simple central algebra, which is in general not under morphism of algebra, just a simple central algebra, but for instance, if you are working over a finite field, all simple central algebra are matrix algebra. And in fact, we have characterization. This simple central algebra is split if and only if Z is a norm in this extension. In that case, yeah, so in that case, it means that if it splits, it's under morphism of some vector space over Z. And so I'm just drawing this vector space here, but you have to be very careful. This is the vector bundle that doesn't descend on Z, so just the fiber here. And so in that case, this isomorphism, I mean the splitting of this isomorphism is given by this map, this map evaluation of A phi I've defined before. So this evaluation of mapping f of X to f of A phi, realize this splitting. And so it's a way to see that the fiber of A over Z acting on this vector space. And so what is linearized with Salomon code, you just have some element of A, and each time you have a point on spec Z, you can evaluate it. You have this evaluation map, this evaluation map gives you an an anamorphism of K, and K can be seen as something laying above Z. So what you have to maybe to retain is that the gabidrin code actually is something laying over, leaving over a point, the point one, we have a vector bundle of dimension, vector space of dimension one over a point, and the linearized gabidrin code is the same situation, but we just evaluate that not only at one, but at many other points. Okay so it's a drawing. And you can wonder if we can do this in a more general situation, instead of having a covering of P1 by P1 or A1 with one point removed by A1 with one point removed, can try to do this in a more geometric situation, we have a curve on the bottom and another covering which is a cyclic Galois covering. But it's actually a working progress, I won't say so much about this, but it's something which would be linearized geometric codes or something like this. So it's a very, very early stage, so I just started this work with Amore Drouin, we will be a PhD student of mine next year. So in the five, ten minutes I have, I will just want to speak about duality. Maybe I won't finish what I want to say about this, but I think it's important. So the starting point is this theorem. If you have a read salomon code, which was defined, I remember by this evaluation formula in the very classical setting, then we can define the dual of this code. So by definition the dual is the orthogonal for the usual standard product of K to the N, on K to the N, natural pairing. And the dual of this code is the image of, can be described as the image of an anothomapping, given in terms of residues. This is quite interesting because the dual of a code in general is defined, is an intersection of hyperplane, but it is described as an image of something else. The proof of this is not difficult, it is a residue formula and an argument dimension. But there is an interesting corollary that using this description you can prove that the dual of read salomon codes actually also meet the single ton bound. So not only the read salomon codes are MDS, but the dual of them are also MDS. 
And what I want to do is try to generalize this result to a linearized read salomon code. So if I have the dual of a linearized read salomon code, which is this code I defined before, I want to have a description of the image of the dual. So there are some difficulties. So of course we are now looking for the image of something coming from, defined in terms of orypoenomials. The problem is that we don't know what is the residue for orypoenomials, we have to define this. And the second point is that now we no longer have something with the image in K to the end, but in this andomorphism of K to the M, and so we have to define a pairing on this space. So there are two points to do, two things to do. So I'm starting with a residue. For this I need to figure out what is a good orypoenomials. So the expansion is nothing but trying to evaluate orypoenomials on some thickening. And so I say orypoenomials, andomorphism of some vector models. The point is that on the left hand side, of course, this is a trigger vector model, you can just consider the image of a thickening of a point, and then you get a thickening of the vector models, no problem. But on the right side, it's not so clear what you get. And here comes a theorem saying that if you have a close point in spexies, that is an orypoenomial in Z, then there is an isomorphism between this projective limit corresponding to the function of the thickening and a ring of power series ring over the special fiber. And okay, maps A and 2H. And from this we can define a residue of orypoenomials. So first of all I need to define the fraction field of A, because if I want a residue I want a fraction. So it's defined as A tensor of fractions, quotient. So okay, I skip this. And now I have the residue map, so I'm considering N a reducible polynomial in Z. If I have a orypoenomial I can just map it to it, consider this Taylor expansion. And I can use the previous theorem to see the Taylor expansion as a real, I mean as a concrete series in H. And I can just consider the fraction field on the left, then it's since N maps to H, it maps to this projective limit where N is inverted. And on the right hand side it maps to a low run series with coefficient in A over N A. And now I can consider the coefficient in H minus 1 as for the usual residues. And I call this composite a residue at N. And I can define the residue at A as the residue at this polynomial which is the point Z over which A is defined and I evaluate at A is this. And this way I get an endomorphism of K. Okay, yeah, just a minute. So I just want one slide. And now I can define the pairing of the endomorphism of A for this I need to proceed by steps. Okay, so I first define a pairing of K which is a usual pairing given by the trace. Sorry. Then I can define a pairing of the endomorphism of K by taking this usual formula, the trace of the I joint of U composed with V. And then I have a pairing on the mapping which is the sum of the trace. Okay. And this can actually be put back on our polynomials. So the pairing is defined by this formula. You see that the trace and the I joint and the turn out that the trace and the I joint have some analogs on our polynomials. So for the trace we have seen that we have a radius trace here and it's easy to prove that the radius says it makes this I and I commute so it's when you evaluate. So after evaluation the radius trace becomes a trace. And it already himself define a pairing on our polynomials which is defined by this formula. 
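For concreteness, the classical residue description of the dual Reed–Solomon code alluded to here can be written as follows; this is the standard statement, not a quote from the slides.

```latex
% Dual of a Reed–Solomon code via residues of g(x) dx / prod (x - a_j).
\mathrm{RS}(k;\,a_1,\dots,a_n)^{\perp}
  \;=\; \Bigl\{\,\bigl(\operatorname{Res}_{a_1}\omega_g,\dots,
                       \operatorname{Res}_{a_n}\omega_g\bigr)
        \;:\; g \in K[x],\ \deg g < n-k \,\Bigr\},
\qquad
\omega_g \;=\; \frac{g(x)\,dx}{\prod_{j=1}^{n}(x-a_j)},
\qquad
\operatorname{Res}_{a_i}\omega_g \;=\; \frac{g(a_i)}{\prod_{j\neq i}(a_i-a_j)} .
```

In particular the dual is again an (generalized) evaluation code, which is how one sees that it is MDS as well.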
So we define a junction on our polynomial defined by this formula. Sum of A I X I is the I joint of this is the sum of X minus Y A I. And after evaluation this becomes a natural pairing. This adjuncts becomes the I joint I defined before on the anamorphism of K. And using this you see that the pairing of evaluation of G and evaluation of F at this point is actually the radius trace of G star F evaluated at some point. And you can yes, yes, yes, yes, finished. And you have the same thing for the residue. The pairing of the residue and evaluation is the residue of the radius trace of something. And using this you can finally prove this theorem which is that the dual of the linearized with Salomon code is the image of this, of a code like this, where you have the residue at A I inverse of something like this. And the corollary of this is that the dual of linearized with Salomon code means a singleton bond as well as for with Salomon codes. And I'm done. Thank you.
In the 1930’s, in the course of developing non-commutative algebra, Ore introduced a twisted version of polynomials in which the scalars do not commute with the variable. About fifty years later, Delsarte, Roth and Gabidulin realized (independently) that Ore polynomials could be used to define codes—nowadays called Gabidulin codes—exhibiting good properties with respect to the rank distance. More recently, Gabidulin codes have received much attention because of many promising applications to network coding, distributed storage and cryptography. The first part of my talk will be devoted to reviewing the classical construction of Gabidulin codes and presenting a recent extension due to Martinez-Penas and Boucher (independently), offering similar performances but allowing for transmitting much longer messages in one shot. I will then revisit Martinez-Penas’ and Boucher’s constructions and give them a geometric flavour. Based on this, I will derive a geometric description of duals of these codes and finally speculate on the existence of more general geometric Gabidulin codes.
10.5446/53535 (DOI)
Okay, thank you very much for the organizer to invite me to give this lecture. Well, this conference is also attributed to Gilles Lachaud and the next two speakers will talk more about it, but I like to tell also something about Gilles Lachaud. It's not that I know him well personally and I've never cooperated with him, but I've been here at this conference six times before. The first conference I attended, HGCT was in 1989 and the next four I also attended. And so what I like to do is give some history from my personal perspective on algebraic geometry codes. And okay, so I will, for that part, I will use the blackboard. So and what I will explain some history about algebraic geometry codes and in particular about what happened and how the this tower of Gassia Stichternot was invented or how it came or in fact it was more or less an accident. And in that part I like to explain or tell and Al Bassa explained very nice lecture yesterday about the mathematics and he mentioned my name, but my name is honestly for a very small part and the part is that I changed the three in two or two. I will explain that. So some history. So like the first HGCT was in 87 I think. And so the prehistory is Goppa invented the codes using algebraic geometry curves over finite fields and you had to trust one flat of thing result, the HARA things like that. And so then for that was the Gilbert Fashemov bound was beaten for well larger field sizes, but okay these codes were there. Engineers were impressed in particularly the electronic electric engineers, the IEEE community. They said, well okay, well that's nice, but can we understand it? Well that was a problem and a lot of deep mathematics was involved and so and there was another thing. These codes were invented, they were there, but there was no decoding algorithm. So when I started in Eindhoven Technical University people were very much interested in the topic but they had not the right background, so I came from Singularity Theory, applied for a position I was hired and but I didn't know anything about coding theory or about combinatorics. So but I was very lucky because at that time 87, 88, the first decoding algorithm was proposed and by Danish people, Justice Tom Hale was involved in that and then later it was that was for plane curves and Skoda Bogatov and Seja Vladovts. They generalised it for arbitrary curves and so that I could contribute there in this direction and so that's I think the reason I was invited for my first AGT conference. But so but here you had this decoding up to, well this is the design minimum distance of the AG code and well that's the number of errors you want to correct to have the minimum distance or the half the design minimum distance but there was a G involved and so it's not the best you like to have and so I contributed there something but it was not effective. 
I mean but you must understand that you have this, you had these two communities, algebraic geometry I'd say, call it AG and you have say coding theory and well this is say deep mathematics and this is the IEEE, the engineering community where well they really use codes, it's really implemented and so that is very applied and so we live in here in this intersection which I called AGC, Algebraic Geometry Codes and well and this community was well okay we can apply it, forget about that part, the application we continue on this topic and this community said well okay this result is there but we like to understand it and well that was the problem as I said before and then there was a movement it said well AG codes without Algebraic Geometry so I put a banner here, that was really a movement and you say well that's stupid why would you do that just learn Algebraic Geometry and then well then you are 10 years further of course but so well this was really a movement. One of the persons was a playhut and he advocated that and he really gave lectures with this title and well you understand I'm in a technical university and the background here so I was really in the middle and I tried to explain what was going on there but yeah well what was their background so I was really well interested in accomplishing that so okay what happened and then was you had Fang Rowe, they came visit decoding algorithm majority voting of unknown syndromes so let me mention it and the result was they could really efficiently decode up to half the designed MIM distance and it was very nice idea the only problem was the paper was as follows they had a elaborate example and showed how the algorithm worked and then they stated the theorem and the proof was C example well mathematicians are not satisfied with this and then was I showed it to Iwanderzma he was working on his PhD and he came up with the proof so I can say the idea is here but he gave the proof so of course you need both and but that was because it's so basic and so Fang and Rowe they heard about this slogan AG calls without AG so they continued and what out of this decoding algorithm usually if you have a decoding algorithm you can say well look better at it then you get a bound on the minimum distance and that came out that is the Fang Rowe bound or the order bound and that is you can phrase it in a very general terms without using any algebraic geometry at all so that was already a step and in trying to attain this goal of AG codes without algebraic geometry and then he came with another step again Fang Rowe and what he did is the following again his idea say look you have a equation and this is the equation so that is you know a well-known Klein curve and that is over F8 and it has you can easily see if X is nonzero then you have three nonzero solutions in F8 so that was the idea so what he what what is there so you get a tower and so you have well it's given by in the variables and you use the same equation but you shift the two variables Xi and Xi plus one and that is for Is one of two and minus one and then of course you buy this remark for every nonzero actually have three nonzero Y so you have of course the number of points of F8 is at least well you start with three and no seven times three to the power m minus one so that idea is really his or theirs Fang Rowe and and then he used this order bound kind of argument very explicit to show well he claimed that to get asymptotically good codes no edge of big geometry involved except then of course this 
equation but no Riemann-Roch no modular curves nothing and but of course when I saw this result is a well okay that's nice this grows fast and he claims that the parameters are giving asymptotically good codes but of course I was interested in the genius what happens with the genius what is the genius so my guess was it goes goes much faster than the number of points and I knew of a result of Henning Stichter not and try and Rick a negative result in a certain power of function fields then this this quotient of number rational puns divided by the genius I mean yeah that get goes to zero and so it's asymptotically bad so I sent an email to him and it was nice I mean history and I still have the emails and and they are August 1994 so 9 August 1994 I sent an email to Henning Stichter not sorry yeah thank you so and he appeared that he was in in impact in Rio de Janeiro in Brazil and I asked him the question so well look you have this negative result of towers and can you tell me whether that is applies here and then he said well well look I worked very hard with our Arnoldo Garcia and and I have want to have two weeks vacation in Brazil but okay well my first guess is I think not because well but it's difficult because it's not a Galois extension and it's a wild ramification so okay and but well he was still was thinking about it and and so two weeks later so he tried something like this an equation well it looks not like a polynomial equation for but for him it's always function fields so that is just defining the extension recursively and but did not did not work so it is a negative result at least he could show in this case that it doesn't work that you get as in total to that cut tower so so let me see well I can put here 29 of August the same year and then the next day so well look here sorry sorry okay so the idea is you have a polynomial f in two variables right and then you do this and then the question is which f should I use in two variables so thing row use that f the Klein equation and then the techno tried this one yeah so yeah right yeah okay so and at least he could give a negative result but of course it didn't answer my original question so but then I said okay what about this and here comes the change of three and two and now here also the three is changing it to why is that well just for a simple reason that here you have for every nonzero x you have two solutions which are nonzero and here over f8 and here over f4 for every nonzero x you have two solution y nonzero and that's the only reason and I expected the negative results right and and that's only true if a problem is too hard you make it simpler that's the only reason and then try to solve that problem and then the next day he said well I think it's a positive result sorry no no this one with this x y a pie he had a positive result one yeah oh yes of course yes that's important and yeah so yeah sorry thank you okay now it's correct so I think I think plus of all positive result and in the next email he said even said well the general result I have a general result together well well we think of a general result here you couldn't prove it at that time so let me write it correctly over and that is so so let me quote now is is in German but I will translate so we haven't I'm a on string in the water hinder on side I translate and continue in English we had a demanding week behind us but I believe it was worthwhile the sequence is indeed asymptotically good and it even attains the drift of flat it bound that is the 
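To pin down the quantities behind this story, here is the standard notation for the asymptotic ratio of points to genus, the Drinfeld–Vlăduţ upper bound, and what the Garcia–Stichtenoth tower achieves over a field of square cardinality (modern notation, not the exact blackboard formulas).

```latex
% Ihara's constant, the Drinfeld–Vlăduţ bound, and the Garcia–Stichtenoth result.
A(q) \;=\; \limsup_{g\to\infty}\,\frac{N_q(g)}{g},
\qquad\text{(Drinfeld--Vl\u{a}du\c{t})}\quad A(q)\;\le\;\sqrt{q}-1,
\qquad\text{(Garcia--Stichtenoth)}\quad A(q^{2})\;=\;q-1,
```

where N_q(g) denotes the maximal number of F_q-rational points on a curve of genus g over F_q; the recursively defined tower over F_{q^2} attains the bound in the sense that the ratio of rational points to genus tends to q − 1 along the tower.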
So that is the history. I ruined his vacation, but we got a really wonderful result, and it is really a miracle how this happened. So much for the history, and that is my tribute to Gilles. I should also say: if you think about algebraic geometry codes, then for me Gilles is located here, in Luminy; for me this has always been the center where the action in algebraic geometry codes was, and I think we can attribute that to Gilles.

Okay, now my lecture. I spent some time on Gilles, and that is important. I cannot explain everything here, but that is not so important either; the main message I want to convey is this. For the weight enumerator there is some theory, and I think we understand it more or less; it is hard to compute, whichever way you go about it, but the coset leader weight enumerator is much, much harder, and not much is known about it. Even for a conic you can compute it, but already for the twisted cubic you must analyze things in very much detail to get a closed result. For that, I should mention, this is joint work with Aart Blokhuis and Tamás Szőnyi, and we use the classification of the points of PG(3, q) with respect to the twisted cubic, the normal rational curve of degree 3 in PG(3, q), under its group of projectivities, due to Bruen and Hirschfeld. I had done this for the conic, the coset leader weight enumerator, with Relinde Jurrius, and I thought: let us ask Aart Blokhuis, my colleague in the same corridor, about the twisted cubic; he knows everything about the twisted cubic, and he thought so too. But it turned out to be very interesting and demanding, and you really have to go deep into the details of this classification of Bruen and Hirschfeld.

First some elementary things we all know. F_q is the finite field with q elements. The weight of a vector is the number of its nonzero entries; the Hamming distance between two vectors x and y is the number of positions where they differ, which is the weight of the difference x - y. An [n, k, d]_q code is an F_q-linear subspace of F_q^n of dimension k and minimum distance d, the minimum distance between two distinct codewords, which for a linear code equals the minimum weight of a nonzero codeword. Then there is the notion of a degenerate code, which is important in the sequel: a code is degenerate if there is a position where all codewords are zero. You may say that this is not interesting, you just delete that position and shorten the code, you would never use such a code; but for the arguments it is important to allow them.

A generator matrix of a linear code of dimension k is a k by n matrix whose rows form a basis of the code, and the code can also be written as the null space of a parity check matrix H. We have the standard inner product and the dual code, consisting of all vectors orthogonal to all codewords, and a parity check matrix of C is nothing but a generator matrix of the dual code. Now we come to the notion of the weight enumerator. A_w is the number of codewords of weight w, and you collect these numbers in a polynomial, either in one variable Z or homogeneously in two variables X and Y.
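Written out, with the standard homogenization convention (the choice of convention here is mine):

    W_C(X, Y) \;=\; \sum_{w=0}^{n} A_w \, X^{\,n-w} Y^{\,w}, \qquad A_w = \#\{\, c \in C : \mathrm{wt}(c) = w \,\},

and the one-variable version is W_C(Z) = \sum_w A_w Z^w = W_C(1, Z).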
I prefer the homogeneous form; we will see in a minute why it is comfortable to have it this way. For every positive w, if you have a nonzero codeword you can multiply it by a nonzero scalar and you get again a codeword of the same weight, so for w > 0 the number A_w is divisible by q - 1. Dividing out by q - 1, which in a sense means considering the problem projectively, gives numbers A_w bar that count the corresponding projective points.

Why are people interested in the weight enumerator? Not only as mathematicians, to classify codes (it gives a crude classification), but because it has a probabilistic interpretation. In a q-ary symmetric channel with crossover probability p, the term (1 - p)^n is the probability that a sent word arrives without errors, and if you subtract that from the appropriate evaluation of the weight enumerator, you get the probability of an undetected error at the receiving end: a codeword is sent, and what is received is again a codeword, but a different one, so you do not detect the error; you say, it is a codeword, it is probably the codeword that was sent. There is some probability that this happens, and you can compute it in terms of the weight enumerator.

There is another thing you can compute. Suppose you have a bounded distance decoder, decoding up to half the minimum distance. Around each codeword there is a sphere: if the sent codeword is received with few errors, the received word lies in the right sphere and the decoder returns the correct codeword. If there are many errors, the received word may land in the sphere of a different, closer codeword, and the decoder returns a codeword that is not the one that was sent; and there is also the region outside all spheres, where the decoder can do nothing and you have a decoding failure. The probability of a decoding error is again given by a rather complicated formula in terms of the weights A_v, the crossover probability, and certain combinatorial numbers: the number of vectors in F_q^n of weight w at distance s from a given vector of weight v (which vector of weight v you take does not matter). I mention this just to assure you that the weight enumerator has applications and that engineers are interested in it.
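To make the first of these two probabilities explicit: on the q-ary symmetric channel a fixed error pattern of weight w occurs with probability (p/(q-1))^w (1-p)^(n-w), and an error goes undetected exactly when the error vector is a nonzero codeword, so

    P_{\mathrm{undetected}}(p) \;=\; \sum_{w=1}^{n} A_w \left( \frac{p}{q-1} \right)^{\! w} (1-p)^{\, n-w}.

(The decoding error probability of the bounded distance decoder has a similar but more involved expression, which I do not reproduce here.)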
In what follows I will look not only at the code itself, but also at the extended code, extended by scalars: I use the notation C tensor F_{q^m} for the same code considered over the field extension of degree m. If you have a generator matrix with entries in F_q, you use the same generator matrix, but the message coefficients are now allowed to lie in F_{q^m}.

Now I come to the weight enumerator of a code in terms of arrangements, and of course there is finite geometry involved here, in fact a lot of it, because the connection between codes and point configurations in projective space is classical. I use it in the form of a projective system, following a small paper by Katsman and Tsfasman which appealed to me very much when I read it; you also find this idea in the book of Tsfasman and Vlăduț. A projective system is simply an n-tuple of points in projective space of dimension r, with the single assumption that they do not all lie in a hyperplane. If you have a generator matrix of a non-degenerate code, all columns of the generator matrix are nonzero, so you can view the columns as points of the projective space of dimension k - 1, and in this way you get the projective system associated with the generator matrix. Conversely, if you start with a projective system, you use the coordinates of the points as the columns of a k by n matrix; by the assumption that the points do not lie in a hyperplane, this matrix has rank k, so you get an [n, k] code. As for equivalence: on the code side you take generalized equivalence of codes, on the geometry side projective transformations of the projective space together with permutations of the points, and in fact the two theories are equivalent.

Then the question is how to express the minimum distance. Write a codeword as c = mG with the message m in F_q^k, and consider the hyperplane whose homogeneous linear equation has m_1, ..., m_k as coefficients. Then the number of points of the projective system lying in this hyperplane equals n minus the weight of the codeword. So you have a very nice interpretation of the weights in terms of the projective system, and consequently A_w bar is the number of hyperplanes in the projective space of dimension k - 1 containing exactly n - w points of the projective system of the generator matrix.

Another important remark: suppose the code is non-degenerate. An MDS code, that is, a code attaining the Singleton bound, so an [n, k, n-k+1] code with the largest minimum distance you can have, corresponds exactly to the points of the projective system being in general position: at most k - 1 of them lie in any hyperplane. These things were well known to the finite geometry people; arcs is the terminology used there. So you have this translation, and it is well known.

Dually you have an arrangement: instead of points you use hyperplanes, in projective space or, and I will do both, in F_q^k, and I assume it is what people call an essential, or central, arrangement, meaning that the intersection of all the hyperplanes is the origin in the vector space picture and empty in the projective picture. Again you take the generator matrix; the entries of a column are now the coefficients of a homogeneous linear equation of a hyperplane, so you get n hyperplanes, you put them into an n-tuple, and that is the arrangement associated with the code. Conversely, an arrangement over a finite field gives you a code, and these are again equivalent theories. Here too you can read off the minimum distance: n minus the weight of a codeword equals the number of
hyperplanes of the arrangement passing through the corresponding point, the codeword now being read as a point in homogeneous coordinates. And again the dual statement: A_w bar is the number of points of the projective space through which exactly n - w hyperplanes of the arrangement pass.

Now the picture, with the two dual views. The projective system consists of these four points in general position in the projective plane; the maximum number of the points on a line is two, so the minimum distance is 4 - 2 = 2, and since we are in the plane the dimension of the code is three. Dually, as an arrangement, you have the four lines, and the maximum number of lines through a point is again two, so indeed the minimum distance is two. From this picture it is easy to compute the weight enumerator. This is of course a very simple example, just to get the idea, but I am sure that after this lecture you will draw your favourite line arrangement and compute its weight enumerator; that is an exercise you can do.

In particular A_n bar, where n is the length of the code and I view things projectively, is just the number of points in the complement of the union of the hyperplanes of the arrangement. This is of course also what people compute using the zeta function of the complement of a hyperplane arrangement. By the principle of inclusion and exclusion it is an alternating sum, over subsets of the hyperplanes, of the numbers of points in the intersections; if the subset is empty the intersection is the whole space. From this you can immediately compute the number when the hyperplanes are in general position, because then an intersection of w of them has codimension w and there are n choose w ways to choose them, so you get a closed formula; we will see it in a minute in another way. That is the idea for A_n bar, the number of codewords of maximal weight, seen projectively.

Okay, there are some technicalities involved, and I go through them quickly; you can find this in the paper of Katsman and Tsfasman. You look at the subcode C(J) of codewords that are zero on a given subset J of positions. This subcode is isomorphic, as a vector space, to the intersection of the corresponding hyperplanes of the arrangement, now seen in the vector space and not projectively, and you compute its dimension l(J). The quantity B_J is then essentially q to the power l(J), counting the codewords that vanish on J, and B_t is the sum of the B_J over all subsets J of size t. There is also an extended version: instead of over F_q we want to do this for the extended code, and we introduce a variable T, because if you carry out this computation for all subsets J, which is just linear algebra, computing ranks of matrices, then you know the weight enumerator not only over F_q but over every extension; that is why I write T instead of q, and substituting q^m for T gives the weight enumerator of the extended code. In this way you get a relation between the B_t and the A_w, and of course they determine each other. On the slide a minus sign has gone missing, it should be an alternating sum; I have to correct that, sorry. But anyway, this is
an important result: the weight enumerator is now expressed in terms of these B_t, and as you have seen, the B_t are obtained just by computing dimensions of subcodes that vanish on prescribed sets of positions; for the extended version you have the variable T in there. Indeed, the number of codewords of the extended code over F_{q^m} of weight w is obtained by substituting T = q^m. In a sense this is a motivic version: for the weight enumerator itself you count points over a finite field, but in this version there is no counting of points, only counting of dimensions, and then your field of scalars can be any field.

Right, so here, I think, is the version with the correct alternating sign. For an MDS code the number of codewords of a fixed weight w is given by the well-known closed formula, and for the extended version by the corresponding formula; in a sense we have already seen this on the inclusion-exclusion slide.

Now let us compute it for the simple example. A_1 bar is zero: there is no point lying on three of the lines. There are six points lying on exactly two lines, the red dots in the arrangement picture, so A_2 bar equals 6. For the points lying on exactly one line: there are four lines, each projective line has T + 1 points, and on each line you have to exclude the three intersection points, so you get 4(T - 2) for A_3 bar. And the last one you never have to compute, because the A_w bar add up to the total number of points of the projective plane, T^2 + T + 1. (Sorry, it is hardly visible on the board, so let me erase it and write it again.) I hope you understand this; then, for your favourite example, if it is a three-dimensional code, you can draw the picture and compute the weight enumerator, and even the extended weight enumerator, by hand.
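If you would rather let a computer do that exercise, here is a small brute-force check of the example. The four points are taken, and this is just my choice, any four points of the plane in general position will do, as the columns of G = (I_3 | 1), and q is a prime so that plain arithmetic modulo q suffices.

from itertools import product

q = 5                                # any prime
n, k = 4, 3
G = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]                   # columns: four points of PG(2, q) in general position

A = [0] * (n + 1)
for msg in product(range(q), repeat=k):              # all messages in F_q^k
    c = [sum(msg[i] * G[i][j] for i in range(k)) % q for j in range(n)]
    A[sum(1 for x in c if x != 0)] += 1              # Hamming weight of the codeword

print(A)                                             # brute-force A_0, ..., A_4
print([1, 0, 6*(q-1), 4*(q-2)*(q-1), (q*q - 3*q + 3)*(q-1)])

The two printed lists agree, and they are exactly (q - 1) times the A_w bar computed above, evaluated at T = q, together with A_0 = 1.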
Now let me give an overview of the connections. We have the extended weight enumerator. There is also the notion of the generalized weight enumerators: there you consider subcodes, not the extension of scalars but subcodes, of every dimension r from 1 up to k, and for each r you get a generalized weight enumerator; this whole collection of polynomials and the extended weight enumerator determine each other. Then Greene's theorem gives the connection with the Tutte polynomial: the matrix of the code determines a matroid, the matroid has its Tutte polynomial, also known as the dichromatic polynomial, and that determines the weight enumerator, and it goes both ways. You also have the notion of the coboundary polynomial, or two-variable characteristic polynomial, of the geometric lattice of the matroid, which is almost the same thing, and then the so-called two-variable zeta function introduced by Iwan Duursma. They all determine each other; there are explicit formulas going from one to the other, and you can find all of this in the PhD thesis of Relinde Jurrius. And of course, well, not quite of course: the ordinary weight enumerator is a strictly weaker invariant than the extended weight enumerator.

So why do I tell you all this? Because in my last five minutes I want to talk about the coset leader weight enumerator, and to approach it you first have to understand this picture in terms of projective geometry. Both the projective system and the arrangement show up, and you have to use them both, but now the projective system of the parity check matrix, not of the generator matrix. I give a rough outline, because I do not have much time. You consider the columns of a parity check matrix; they give a projective system. What is the weight of a coset? In the coset you look at the minimal weight of an element. A coset leader is an element of minimal weight in its coset; there may be many, and you just choose one. Instead of A_i we now write alpha_i for the number of cosets of the code of weight i, and again you can collect these numbers in a polynomial, in its homogeneous version: that is the coset leader weight enumerator.

Why do people consider this? Because it computes the main probabilistic quantity you want to know, as an engineer, about the behaviour of the code on a channel with a given crossover probability. There is the coset leader decoder. As preprocessing, one way to do it, you keep in memory a list of all coset leaders; for a received word you look up the coset leader of its coset, you subtract it, and that is your answer. The answer is a codeword, and its distance to the received word is the weight of the coset leader, which is exactly the distance of the received word to the code, so this decoder really returns a nearest codeword. Of course it is not necessarily the codeword that was sent, so you can have decoder errors, but here is the important property: in a q-ary symmetric channel with crossover probability p, the probability of decoding correctly can be computed in terms of the coset leader weight enumerator. In fact hardly any examples are known, only trivial ones; what people do in practice is simulate the channel and estimate this probability, so general results are really scarce. Of course, for i up to half the minimum distance every coset of weight i has a unique coset leader, so for those i the number alpha_i is just the number of vectors of weight i; beyond the covering radius alpha_i is zero by definition; and the sum of all the alpha_i is just the number of cosets.

So, one simple, trivial example: the dual of the repetition code. Its parity check matrix is the all-one row, so it is a code of dimension n - 1 and minimum distance two. Here we can choose coset leaders of the form (lambda, 0, ..., 0)
with lambda in F_q arbitrary; but of course you could just as well have chosen the second position, so there are many possible choices of coset leader for a given coset. For this code it is easy to compute the coset leader weight enumerator.

The next example is already not so trivial, although the code is very easy to describe. Consider the product code of two such codes, the C_m and C_n considered before: the codewords are now seen as m by n matrices, and the conditions are that every row sum is zero and every column sum is zero; those are the parity checks, and that is all that defines the code inside the space of m by n matrices. The parameters are: length mn, dimension (m-1)(n-1), minimum distance four. What is the coset leader weight enumerator? For q = 2 and q = 3, Putranto Utomo did this in his PhD, and there is already quite complicated combinatorics involved; it is an open question to do it for arbitrary q. That is an exercise for you: at the end of the week you have the answer, I challenge you. It is much harder than you might think at first, but I hope someone is clever and finds the right way to look at it. That is in fact my hope for this whole lecture: that people think about it and come up with new ideas to tackle this coset leader weight enumerator problem.

Again, as we did for the weight enumerator, you can extend the code and look at the coset leader weight enumerator of the extended code: alpha_i(q^m) is by definition the number of cosets of the extended code of weight i. One can show that there is a unique polynomial alpha_i(T) interpolating these values, divisible by T - 1 for i > 0, and with alpha_i bar (T) = alpha_i(T)/(T - 1) you get the extended coset leader weight enumerator of the code, a polynomial in X, Y and T.

At least I should explain what the geometry is in terms of the parity check matrix, and then I think I should end. If two words lie in the same coset, their difference is a codeword, so they have the same syndrome: the syndrome of a received word determines its coset uniquely, and there is a one-to-one correspondence between cosets and syndromes. So now fix a parity check matrix H, and suppose it is non-degenerate, so there is no zero column. The weight of a syndrome with respect to H (for another parity check matrix it can be completely different, but we fix H) is the weight of the corresponding coset. And what is the linear algebra behind this? The syndrome is a linear combination of columns of the parity check matrix, and the syndrome weight is the minimal number of columns of H needed to write the syndrome as such a linear combination. So what you have to count, for each i, is the number of vectors that lie in the span of some i columns of H but not in the span of fewer columns. In this way you can, at least in principle, compute the coset leader weight enumerator, or the extended one.
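Since the binary case of that product code challenge is tiny, you can at least see the answer for q = 2 by brute force; here is a quick sketch for m = n = 3, that is, for the [9, 4, 4]_2 code with 2^5 = 32 cosets, using the row and column parities as the syndrome.

from itertools import product

m = n = 3
leader_weight = {}                     # syndrome -> minimal weight seen in that coset
for bits in product((0, 1), repeat=m * n):
    M = [bits[i * n:(i + 1) * n] for i in range(m)]
    syn = tuple(sum(r) % 2 for r in M) + tuple(sum(c) % 2 for c in zip(*M))
    w = sum(bits)
    leader_weight[syn] = min(leader_weight.get(syn, m * n), w)

alpha = {}
for w in leader_weight.values():
    alpha[w] = alpha.get(w, 0) + 1
print(sorted(alpha.items()))           # [(0, 1), (1, 9), (2, 15), (3, 7)]

So for the 3 by 3 binary product code alpha_0 = 1, alpha_1 = 9, alpha_2 = 15 and alpha_3 = 7, and in particular the covering radius is 3; the challenge is what the corresponding polynomials look like for general q.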
But mind you: for the coset leader weight enumerator the projective system of the points matters in a finer way. These six points define a code, and the matroid, and hence the weight enumerator data, of this configuration and of that one are the same; for the weight enumerator that does not matter, it is enough information to compute the weight enumerator or even the extended weight enumerator. But here you have a problem: in this configuration three of the connecting lines go through one point, and in that configuration they do not, and that gives a completely different answer for the coset leader weight enumerator. The six points themselves are, projectively seen, the cosets of leader weight one. For weight two you look at a line connecting two of the points and at the points on that line distinct from those two, and you count. In the one configuration every connecting line contains three of the points, so a coset of weight two has in fact three possible coset leaders: these two points, these two, or these two. In the other configuration that does not happen, and you get a completely different answer.

So what we use, in geometric terms, is this: you have a projective system, you take points of it in general position, you connect them, each such choice spans a subspace, and in this way you get an arrangement, which we call the derived arrangement; the projective system sits inside its derived arrangement. These connecting lines, that is the derived arrangement, are different in the two cases, and this gives you a way, theoretically at least, to compute the coset leader weight enumerator; it is still complicated, but there is a theoretical formula.

I should skip a lot of slides here and come more or less to what I promised in my abstract; and I also wanted to tell something about Gilles and the history of AG codes, which I have done. So: there are a lot of straightforward steps, you have the twisted cubic, a rational normal curve of degree three in projective three-space, and the answer depends very much on q. There is a number, which we call lambda_q or mu_q, and the answer depends on the value of q modulo 6. As I said, for this you need the classification of Bruen and Hirschfeld, and you have to go into it very deeply and check it meticulously; it is not straightforward. So, as I said: computing the weight enumerator is hard, and the coset leader weight enumerator is even harder, already in the case of the twisted cubic, which is a genus zero curve, so you would think it should be simple; it is not. And for degree larger than three I really do not know; the classification is not known for r bigger than 3, as far as I know. So I think new ideas are needed there; it is up to you, and hopefully you will contribute. I thank you very much for your attention.

[Question from the audience:] Isn't the covering radius of this code known for q equal to 2? That is the so-called Gale-Berlekamp problem, and it is known to be very difficult. [Answer:] Well, you have this code and the dual of this code; I have seen something like that, but I think you are referring to the dual code. Let us talk about it, I am interested in what you have to say. Thank you.
In general the computation of the weight enumerator of a code is hard, and even harder so for the coset leader weight enumerator. Generalized Reed-Solomon codes are MDS, so their weight enumerators are known and the formulas depend only on the length and the dimension of the code. The coset leader weight enumerator of an MDS code depends on the geometry of the associated projective system of points. We consider the coset leader weight enumerator of F_q-ary Generalized Reed-Solomon codes of length q + 1 of small dimensions, so its associated projective system is a normal rational curve. For instance, in case of the [q+1, 3, q-1]_q code, where the associated projective system of points consists of the q+1 points of a plane conic, the answer depends on whether the characteristic is odd or even. If the associated projective system of points of a [q+1, 4, q-2]_q code consists of the q+1 points of a twisted cubic, the answer depends on the value of the characteristic modulo 6.
10.5446/53536 (DOI)
Thank you very much. It is a big pleasure to be talking here today at AGCT. Of course, sadly, the AGCTs are not the same without Gilles and without Alexey. About Gilles Lachaud there will be more talks in the coming days, so I do not want to spend too much time on it; but let me just say that, looking at the audience here, with all these people from different parts of France and of the world, where in most cases Gilles Lachaud played, directly or indirectly, a role in their academic career, it is clear that he will be deeply missed, as a friend and as a colleague.

What I want to talk about today is recursive towers, and good recursive towers. As you all know, curves over finite fields with many points have been one of the main themes of the AGCTs throughout the years. There have been various approaches, but I will mainly focus on recursive towers, which is the topic I have been working on in the past. I will try to give an overview of what recursive towers are, how we find them, and why they are important, and at the end I will talk about a recent joint result with Christophe Ritzenthaler about good recursive towers over prime fields.

So let me start. I will not say much about other ways of constructing asymptotic sequences of curves with many points, but I will try to hint at some of the connections. Throughout the talk, C will always denote a smooth, projective, absolutely irreducible curve over a finite field F_q with q elements. Alternatively you could talk about algebraic function fields in one variable with constant field F_q, but I will stick to the language of curves today. One of the main results in the theory of curves over finite fields, as you all know, is the theorem of Hasse-Weil, which states that the zeta function associated to such a curve satisfies the Riemann hypothesis. As an immediate consequence we get a good bound for the number of rational points on such a curve, the Hasse-Weil bound: the number of F_q-rational points is at most q + 1, where q is the cardinality of the finite field, plus 2 times the genus of the curve times the square root of q. The Hasse-Weil bound usually comes together with a lower bound as well, telling you how much this number can differ from q + 1; but in this talk I will be interested in the case where the genus tends to infinity, or at least where the genus is large compared to the cardinality of the finite field, and in that regime the lower bound becomes negative and hence meaningless, so I will only state the upper bound. As I said, this is one of the strong bounds we have for curves over finite fields. There are various improvements, but I think it is fair to say that if the genus of the curve is not too large, it is a quite good bound. What happens for curves of large genus is different: it was noticed by Ihara and by Manin at the beginning of the 80s that if the genus is large compared to the cardinality of the finite field, then the Hasse-Weil bound is bad, in the sense that it can no longer be attained.
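For reference, the bound in symbols, in its two-sided form (in what follows only the upper inequality is used):

    \big| \#C(\mathbb{F}_q) - (q + 1) \big| \;\le\; 2\, g(C)\, \sqrt{q}, \qquad \text{in particular} \qquad \#C(\mathbb{F}_q) \;\le\; q + 1 + 2\, g(C)\, \sqrt{q}.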
And then the question is: what are reasonable bounds in this regime of curves of large genus? To study that, Ihara introduced the following quantity, now known as the Ihara constant. For curves C over a finite field F_q you look at the number of rational points divided by the genus, and you consider this ratio as you run over families of curves of increasing genus, all defined over the same finite field F_q. This tells you, for large genus, what you can expect as the maximum number of points in comparison to the genus. The quantity depends only on q and is denoted A(q). If you look at the Hasse-Weil bound, for fixed q and large g the term q + 1 does not matter much, and you see that the number of rational points can grow at most linearly in the genus, with ratio 2 times the square root of q. So the Hasse-Weil bound directly implies that A(q) is at most 2 times the square root of q. By the result of Ihara and Manin this can be improved for large genus, and the improvement is asymptotic in nature: Ihara's argument gives roughly a factor of the square root of 2, so A(q) is roughly bounded by the square root of 2q; that is just to give an idea of the order of magnitude. The ideas of Ihara were developed further by Drinfeld and by Vlăduț, and they showed that in general A(q) can always be bounded above by the square root of q minus 1. As of today, this is still the best known upper bound we have for the quantity A(q).
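Collected in one line, with the definition of the Ihara constant:

    A(q) \;:=\; \limsup_{g(C) \to \infty} \frac{\#C(\mathbb{F}_q)}{g(C)}, \qquad A(q) \le 2\sqrt{q} \ \text{(Hasse-Weil)}, \qquad A(q) \le \sqrt{q} - 1 \ \text{(Drinfeld-Vlăduț)},

with Ihara's intermediate improvement of the order of \sqrt{2q} in between.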
So what about lower bounds for A(q), and how would you obtain them? The only way used until now is to construct, in one way or another, sequences of curves C_i, each defined over the same fixed finite field F_q, such that the genera tend to infinity and such that the ratio of the number of rational points to the genus tends to something large. This limit, if it exists, we denote by lambda and call the limit of the sequence. If you have such a family with lambda positive, you immediately get a non-trivial lower bound for A(q). Such sequences we call good, and if lambda is as large as possible, equal to A(q), we call the sequence optimal; of course we do not know the value of A(q), so this is a slightly strange definition where we do not know the right-hand side, and sometimes people instead put the best known upper bound, the Drinfeld-Vlăduț bound, there.

The question is then how to come up with curves that have many points compared to their genera, and I will give a short overview of the known results. Serre has shown, using class field towers, so using class field theory, that A(q) is positive for every q; in fact it can be bounded below by a logarithmic term: there is an absolute constant c such that A(q) is at least c times log(q). So this quantity A(q), which in a sense is universal, you can attach it to any finite field, lies in any case between 0 and the square root of q minus 1, and it is positive. There have been various further results for small finite fields, over F_2, F_3 and so on; I am not going to list them all, but many of the people who contributed are in the audience, and there will be more discussion of this later.

Another lower bound is due to Ihara and, independently, to Tsfasman, Vlăduț and Zink; the method here is reduction of elliptic modular curves, or of Shimura curves. They showed that if q is a square, an even power of a prime, so a quadratic finite field so to say, then the value of A(q) is known and equals the square root of q minus 1, exactly the quantity given by the Drinfeld-Vlăduț upper bound. Let me also mention, since we are at AGCT and one of the C's stands for coding theory, one of the nice contributions of Tsfasman, Vlăduț and Zink: using the curves you construct, you obtain long codes beating the Gilbert-Varshamov bound, so long linear codes with very good relative parameters. Quadratic finite fields are the only case where we really know the exact value of the Ihara constant.

What else is known? For cubic finite fields, q = l^3 with l a prime power, there is a result by various people, namely Zink, van der Geer and van der Vlugt, and Bezerra, Garcia and Stichtenoth, giving the lower bound A(l^3) >= 2(l^2 - 1)/(l + 2). As for the methods: Zink used degenerations of Shimura surfaces, while van der Geer and van der Vlugt and Bezerra, Garcia and Stichtenoth used explicit recursive towers. This lower bound looks a bit mysterious, but I will tell you in a moment how it fits into a uniform picture together with the bound we have for quadratic finite fields.
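Summarizing the lower bounds mentioned so far:

    A(q) \ge c \cdot \log q \ \text{for all } q \ \text{(Serre)}, \qquad A(\ell^2) = \ell - 1 \ \text{(Ihara; Tsfasman-Vlăduț-Zink)}, \qquad A(\ell^3) \ge \frac{2(\ell^2 - 1)}{\ell + 2} \ \text{(Zink; van der Geer-van der Vlugt; Bezerra-Garcia-Stichtenoth)}.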
The last thing I want to mention in this overview is a joint result of myself with Peter Beelen, Arnaldo Garcia and Henning Stichtenoth. It holds for all finite fields except prime fields: q = l^n with n at least 2. For those non-prime finite fields we obtained the following lower bound for A(l^n). To compare, the Drinfeld-Vlăduț upper bound, the best known upper bound, is l^(n/2) - 1. The lower bound we obtain is the harmonic mean of two quantities, one a bit smaller and one a bit larger than this: since n/2 need not be an integer, you round it once down and once up, getting l^(floor(n/2)) - 1 and l^(ceil(n/2)) - 1, and you take the harmonic mean of these two; so you see it is already of the right order of magnitude. The harmonic mean, just to recall, is 2 divided by the sum of the reciprocals of the two quantities.

One thing to notice: if n is even, so q is a square, the floor and the ceiling do not matter and you just get the harmonic mean of the square root of q minus 1 with itself, so you recover the Ihara and Tsfasman-Vlăduț-Zink result. If n equals 3, the case of a cubic finite field, you recover exactly that mysterious quantity in the Zink bound. And for all other finite fields, except prime fields, you still get a good lower bound. In fact one can obtain this result in two different ways: one way is to use recursive towers, which is what I will be talking about today, not only in the context of this result but in general; the other way is to consider Drinfeld modular varieties and to look at particular curves lying on them. Let me also mention that, just as the result over quadratic finite fields gives long codes beating the Gilbert-Varshamov bound for q a square and at least 49, this lower bound lets you construct codes over all non-prime finite fields that beat the Gilbert-Varshamov bound as long as q is not a prime and q is bigger than 49; only q = 125 does not work, unfortunately.

So that is a rough overview of the scenery, so that you see there are various methods of coming up with these sequences of curves with many points, and I will be talking today about recursive towers.

[Question: What do you think the right answer is?] That is a good question. My hope is that it is this lower bound, because then it would be best possible, but my guess is in fact that the right value is the Drinfeld-Vlăduț upper bound. That is my feeling. I have asked other people what they think; Winnie Li told me, for instance, that she thinks this is the main term, but that you always have to add something logarithmic to it. And if you pretended the formula applied to a prime field, the main term would just give zero, and then you would be left with something logarithmic, which would be consistent with all the known results over prime fields, where the lower bounds are logarithmic in q. But I feel that we can still work harder and reach the Drinfeld-Vlăduț bound in each case; that remains to be seen, and hopefully at forthcoming AGCTs we will know more.
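The bound we were just discussing, written out (with q = l^n, n >= 2):

    A(\ell^n) \;\ge\; \frac{2}{\dfrac{1}{\ell^{\lfloor n/2 \rfloor} - 1} + \dfrac{1}{\ell^{\lceil n/2 \rceil} - 1}}.

For n even both terms equal \ell^{n/2} - 1, so the right-hand side is \sqrt{q} - 1; for n = 3 a short computation gives exactly 2(\ell^2 - 1)/(\ell + 2), the Zink bound.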
So now I want to talk about explicit recursive towers. One advantage of explicit recursive towers is exactly that they are explicit. Maybe I should not say this while there is a video recording, but I do not think these curves, and the codes constructed from them, are really used in practice; maybe someone else can say something about that. But if at some point you really want to use the codes constructed from these curves, it is good to have equations for the curves, and that is difficult for class field towers, or for curves lying on Drinfeld modular varieties, or Shimura curves and so on. If you have explicitly, recursively given curves, you know their equations and you can really implement them; that is one advantage. Another advantage is that they are much more accessible: everyone can write down equations of curves and understand what they mean. You lose some of the intuition of what is going on behind them, but I will try today to show you a common feature of all the good examples of recursive towers, which indicates that there is in fact some structure behind them.

The idea of explicit recursive towers goes back to the 90s. I think it was Feng and Rao who first tried to construct curves by such a recursive machinery; they did not really succeed, but the idea was picked up by Pellikaan, who came up with equations which turned out to be the right ones in the end. I do not know what the exact story is, but he communicated them to Arnaldo Garcia and Henning Stichtenoth, who generalized those equations, did all the genus calculations and so on, and came up with the first proof that a sequence of curves given by explicit equations performs well asymptotically. That was in the middle of the 90s.

The main ingredient is in fact quite simple. You have two curves, call them C_0 and C_{-1}, both defined over the finite field F_q; in most cases that we know they are just P^1, two copies of the projective line. And you have two morphisms, call them f and g, from C_0 to C_{-1}. Those are all the ingredients you need to construct your tower. How do you build it? Let me draw it. You take two copies of C_0, with the two different maps, say f on one and g on the other, down to C_{-1}, and you take the fiber product of the two copies over C_{-1}; this gives a new curve C_1. Then you do this iteratively: repeating the same configuration at the base, you get isomorphic copies of C_1, and over C_0 you can take the fiber product of C_1 with C_1 to obtain C_2, and so on. On the left-hand edge of this picture you obtain a sequence of curves, and this is what we call the recursive tower defined by these two curves and the corresponding coverings. In most cases C_0 and C_{-1} are taken to be just the projective line, but you could take something more complicated if you want.
But then the question is what to choose as the maps f and g that you iterate. [Question from the audience about irreducibility.] Yes; it does not need to be irreducible at this point, let me write it down. These fiber products do not even need to be smooth. C_1 basically consists of the pairs (p, q) in C_0 x C_0 such that g(q) = f(p). In general these curves need not be irreducible, and in most cases they will not even be smooth. What I will do is the following: these curves will have some irreducible component of increasing genus with many points, and rather than the curves themselves I will take their normalizations, and as the maps I will take the induced coverings between the normalizations. In general, C_m is the set of tuples (p_0, ..., p_m) in C_0^(m+1) such that g(p_i) = f(p_{i-1}) for i = 1, ..., m. Is that clear? Yes; and the components might still fail to be absolutely irreducible, so the question of when you can ensure irreducibility is part of the game. So the questions are: how can you choose the maps f and g so that, passing to components if necessary, you get a sequence of curves with many points, and so that the genus does not grow too fast.

I want to give you one example to show how such a recursive construction arises very naturally: recursively given modular curves. It is an important example also because, as I will try to explain later, it is believed that all such special coverings giving good sequences must have some modular interpretation. So, as a very important example, fix a prime l and consider the sequence of modular curves X_0(l^n), where the levels are powers of the same prime and n runs over the natural numbers. These curves are known to have many rational points when reduced at a prime p different from l, and Elkies has shown that these sequences of curves can in fact be given in a recursive manner, as I explained above; that is why I want to give this particular example. What do points on these modular curves correspond to? If you take a point on Y_0(l^n), that is, a point of X_0(l^n) which is not a cusp, then we know it corresponds to an elliptic curve together with a cyclic subgroup of order l^n.
Or, alternatively, it corresponds to an elliptic curve E together with an isogeny whose kernel is isomorphic to the cyclic group of order l^n, a cyclic isogeny. Now, a cyclic group of order l^n has a very natural filtration by subgroups, C_{l^n} containing C_{l^(n-1)}, and so on down to C_l, and corresponding to this filtration you get a chain of isogenies: your elliptic curve E, an l-isogeny to E/C_l, another l-isogeny to E/C_{l^2}, and so on, ending with an l-isogeny to E/C_{l^n}. Let me call these elliptic curves E_0, E_1, E_2, up to E_n. So a point on the modular curve of level l^n corresponds to an elliptic curve together with such a chain of l-isogenies.

You already see the recursive structure in this picture: you can go two steps at a time. To a point of Y_0(l^n), corresponding to such a chain of isogenies, you can associate the sequence of triples of elliptic curves connected by two l-isogenies: you get a map from Y_0(l^n) to Y_0(l^2)^(n-1), sending the chain E_0 -> ... -> E_n to the (n-1)-tuple (E_0 -> E_1 -> E_2), (E_1 -> E_2 -> E_3), ..., (E_{n-2} -> E_{n-1} -> E_n). And you see that you cannot take just any (n-1)-tuple of such triples: you need a patching condition, namely that the final part of each triple coincides with the initial part of the next one.

This is exactly the correspondence I talked about before. On X_0(l^2) you have two different maps to X_0(l): a point of X_0(l^2) corresponds to a chain E_0 -> E_1 -> E_2 of l-isogenies, and you can map it either to (E_0 -> E_1) or to (E_1 -> E_2); call these two maps f and g. The patching condition, that in the (n-1)-tuple the last part of one triple agrees with the first part of the next, so that everything assembles into a nice chain from E_0 to E_n, corresponds exactly to the fiber product in the picture over there: you construct X_0(l^3), X_0(l^4) and so on recursively from X_0(l^2) and the two maps f and g down to X_0(l). And you know that, after reduction modulo a prime not dividing the level, these curves have many points, coming essentially from the supersingular points. So this was just to say that looking at recursively defined towers is something very natural.
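Schematically (suppressing the cusps and the usual care about irreducible components, so take this as a sketch rather than a precise statement, and note that which projection you call f and which g is just a labelling choice):

    f, g : X_0(\ell^2) \to X_0(\ell), \qquad f : (E_0 \to E_1 \to E_2) \mapsto (E_0 \to E_1), \qquad g : (E_0 \to E_1 \to E_2) \mapsto (E_1 \to E_2),

and the patching condition embeds Y_0(\ell^n) into

    \{ (x_1, \ldots, x_{n-1}) \in Y_0(\ell^2)^{\,n-1} : g(x_i) = f(x_{i+1}) \},

that is, into the fiber product of n - 1 copies of X_0(\ell^2) over X_0(\ell), of which it is the interesting component.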
But then the question is: modular curves give you equations that behave nicely, but how can you come up with other equations, other maps f and g, that give a nice asymptotic behaviour? That has been a very difficult question. People have been looking for ways of finding nice covers, and new equations were found, and in each case Elkies showed that there was some higher reason why the resulting sequences of curves behave nicely, some modular interpretation of what is going on. He did not call it a conjecture; he called it a fantasia: if you take any recursive tower which is optimal, which behaves well in the asymptotic sense, then there should be some higher reason why it exists, it should in some sense be modular. That is known as Elkies' fantasia: every optimal recursive tower is modular.

Now, after I have hopefully convinced you that these are natural objects, I want to show how, forgetting about the modular world, you can come up with good maps f and g. What do you look for when choosing f and g so that things behave nicely? You need two things: first, the maps should give you sequences of curves, or at least irreducible components that are curves, of increasing genus with many rational points; second, the genus of the sequence should not grow too fast. Let me denote by pi_m the map from C_m to C_{-1}. First: how do you ensure many points? In all the good towers that I know of, what happens is that there are rational points of C_{-1} which split completely under all the coverings pi_m: if you look at the inverse image under pi_m, you get only rational points, and their number is exactly the degree of the covering. So a sufficient condition for many points is: there are points in C_{-1}(F_q), rational points down there, which split under all the maps pi_m from C_m to C_{-1}.

Now let us see what this means in terms of the covers f and g. The graph I drew is sometimes called the pyramid associated to the tower, with the tower sitting on its left-hand edge, and the idea of using the pyramid is to express everything you want to say about the asymptotic sequence up there purely in terms of what happens at the bottom: the two copies of the projective line and the two maps f and g. If you want a point that splits completely, you can ensure it as follows: you take a rational point down at the bottom that splits under g, in such a way that the images under f of its inverse images under g again split in the covering at the bottom.
yeah because if the image here splits in discovering the inverse image here will have to split over here yeah so basically what you want is you want rational points here that split under this map g so that their image the image of their inverse images under f again split in this extension over there yeah so you want for instance you could ask for the existence of there is a set s let's say consisting of some of the rational points of the bottom curve such that if you take any point from s it should split in the map g from c0 to c-1 that's one condition and you would want that if you take one of the inverse images so if you look at g inverse of p and if you push them down by f this should again lie inside this should again lie inside your set s and hence it should split again yeah so the set s should consist only of splitting points and if you go up and down again towards the right you should land inside the same set yeah so this is kind of what is called to be forward complete so we'll say the set s is forward complete if you start in s and you go towards the right you still always land in s yeah so and this would ensure that any points in s would split in the whole power for all maps by m right so this would in particular imply that if you look at the number of rational points of the curve of at level m it will be lower bounded by the carnality of s the number of the points that split times the degree of the map of the map by m yeah so you can kind of if you have such a s you have first of all ensured that the number of rational points kind of grows linearly in the degree of the corresponding maps yeah okay so that's how you would have one way to obtain many okay that would be one way of obtaining many points and the another thing that you want to control is the genus you want that the general of the sequence of curves and should not grow too fast so that the ratio of the number of rational points to the general tends to something positive yeah so in most cases what will happen is you will have a point here which will be totally ramified in all the tower yeah so it's kind of a technical thing that I didn't want to say so if you have a totally ramified points you will know that they will all be irreducible yeah sorry about that yeah I should have said that but I mean yeah yeah yeah yeah I mean the thing is there are some towers where there is no totally splitting point but in that case you have irreducible components you know that of increasing genus with many points yeah so it's a bit technical so I didn't want to kind of yeah you're right in pointing out so it's yeah sorry yeah yeah that would be another way that's yeah I could have done okay okay okay so okay so so let me assume let's say that there is a totally ramified point and then the theorem that I'm going to state at the end there will be one yeah okay so to kind of estimate the genus what you use is you basically look at Freeman-Hurvitz formula right so you consider the covering pi m from cm to c-1 you you look at the ramification here and try to estimate from the ramification behavior in those maps the genera of all of those curves that you have over there yeah and again one way to ensure that the genus does not grow too much is I mean here you have an infinite sequence of curves and a priori there could be infinitely many points that eventually split in the tower right but what we want is that the only that eventually ramify somewhere in the tower but what we want to ensure is that down here you can find the finite set of 
points that you can contain you can find the finite set that contain all the points that at any given point might possibly ramify yeah so that would be kind of a way of ensuring that the genus does not grow too fast yeah so a sufficient condition again would be to find to ask that there are only finitely many points in the bottom curve which ramify in some pi m and so the kind of all the ramification is restricted to a finite set of points and of course in characteristic p you might have like cases where the ramification is very bad right you might have a very big different exponent and so on so you want kind of that the ramification is in some sense weak it should be tame or at least the different in each case should be bounded by the ramification index yeah so you ask for some weakness of ramification which basically means that if you take any point that at some point ramifies then the corresponding different exponents should be some are bounded by a constant times the ramification index yeah okay so how would you ensure that on the picture again you can exploit the fact that you have a recursive structure so you have lots of symmetries if you can control well what happens at the bottom step you have a good control of what happens as you go up here yeah so if you have a point here that eventually ramifies at some point over here then you know that the image under the sequences of maps down here will give you a point that has to ramify over here yeah so if a point down here ramifies eventually it will ramify in one of those extensions in one of those map coverings over here and then you could ask that anything that ramifies here should once you trace back and see where you might have possibly started here land in a finite set yeah so if I have a finite set of points that contain all the ramified ones and that are kind of invariant or that are undergoing now towards the left by taking like an inverse image under f and down by g again if you have such a finite set containing all the ramified points that is so to say backwards complete you would have ensured that there are only finitely many points here that ramify at some level yeah okay again this would be just one way of ensuring it yeah so you want to have that there is a finite set let me call it r at the bottom such that the following holds so r contains all the ramified points of the covering g to from c0 to c-1 and r should be backwards complete so I forgot to say here maybe I'm sorry about that I should have rather than writing p here I should have used s right and similar here in this case since you want it to be kind of a left invariant undergoing towards the left you want that if you go up by f so if you look at f inverse r and down by g you should again land inside r yeah so this is what we will call backwards complete and then these finding f and g so that you have splitting points that are forward complete and that you have a finite set containing all the ramified points that are backwards complete is kind of the main ingredient that you want to have yeah so here you will have also some weakness assumption that you need to satisfy but once you are given that you immediately find a lower bound for the corresponding sequence of curves so here we had seen that the number of rational points grows linearly in the degree and in this case by just applying Riemann-Herwitz you see that the genus will also grow at most linearly in the degree so if you work out what it is it is just the genus of the bottom curve minus 1 plus 1 half times 
the cardinality of this backwards complete set of ramified points, times the c, where the c is kind of the constant that tells you that you have not-too-bad ramification over here, times the degree of the covering pi m. Okay, but what happens in fact in most examples that we know is that you have such a set R and you have such a set S which are backwards and forward complete, but they are also, I mean, the set R is also forward complete and the set S is in fact also backwards complete, and I mean there are some nice things going on over here. So this was first noticed by Lenstra, who somehow stated it in terms of some kind of identity that has to be satisfied, and then, if you want, you can see this backwards completeness or forward completeness as given by some structure on certain graphs, and that was kind of worked out later on by Peter Beelen and by Hallouin and Perret, and in all examples in fact both R and S are both forward complete and backward complete. Okay, so how would one ensure such a thing? I mean, the last thing I want to talk about is this joint result together with Christophe Ritzenthaler: it's a sequence of curves over prime fields that are given by explicit recursive equations and that have a positive limit, yeah, that are good. So in fact for many years people have been looking for explicitly given recursive towers over prime fields, and for decades none was found, and me myself, at the end, even I was very convinced that such a thing might not exist, yeah, so for me it was a big surprise to see that it does in fact exist, and what's more surprising is in fact that it is some very easy construction. So this is some joint work with Christophe Ritzenthaler. So what you do is you take P1 and you take a point q on P1 which is in Fq squared but not in Fq, right, and then you can look at the automorphisms of P1 defined over Fq that fix this quadratic point q, yeah, so you look at the isotropy group of q; this will be, so the isotropy group of q will be, a cyclic subgroup of the automorphism group, let's say of PGL2(Fq), of order q plus 1, so this is what is classically known as a Singer cycle or a Singer subgroup, at least in the case of GL2. And then you can quotient out by this automorphism group, so you get a cyclic covering of degree q plus 1 to P1 again, and of course in this cyclic covering, how will the ramification be? So the point q will be totally ramified, right, so the ramification index e will be q plus 1, and since everything takes place over Fq, and here q can be anything for now, it can even be a prime, you see that also the conjugate of q over Fq will again be totally ramified, and then by Riemann-Hurwitz you see very easily that these are the only two ramified points, because you have P1 here, P1 here, and these two will already give you all the ramification. So if this gives you already all the ramification, you see that on top you have q plus 1 rational points, so the action of this order q plus 1 subgroup over here has to be transitive on them, which means, if you look at the rational points that you have up here, when you go down they will all be collected over a single rational point down here. And then, so this is kind of the basic extension that we are looking at, so this map over here will be what I previously called the f, and then you'll have to find the g so that, together with this f and g, you really have a finite ramification locus, so you have a finite backwards complete set R containing all the ramified points, and you also have splitting points, so you want a forward complete set of points that will give you
points that will split, and hence split in all the corresponding extensions. I just need one more minute. And that's kind of satisfied very easily; namely, a simple observation is the following. So what will R be in this case? For R, let me say what you can take up here, so the inverse image of R up here: you can just take R prime to be the set of all Fq squared rational points that you have, but not the Fq rational points, right, and you can figure out what is the R that lies under it, which is f of R prime. So you take all those points up here, and below you will have some set that we will call R. And for your S, sorry, S prime, you will just take all the Fq rational points up here, and their image, what I had called S before, will be just the single point that you have down here; let me say it's infinity, for instance, you can choose coordinates accordingly, so you will have just one point down here, right, and S prime will be the corresponding inverse image. And then what you want is that when you go up and down these sets stay kind of invariant: they should be forward complete for the S and backwards complete for the R. And one way to ensure that is by choosing the g in a simple way, by making it look a lot like the f, but you just compose it with an automorphism above and an automorphism below, yeah. So you take psi and phi, so psi will be an automorphism of the curve above and phi an automorphism of the curve below, and you want that R prime and S prime and all of them should kind of be invariant under those, yeah. And one way to ensure that, as you can verify very easily, is the following: if you ensure that this R prime is left invariant under psi, right, and if the set S prime is also left invariant under psi, and if the two sets R and S down here are left invariant under phi, so if you have phi of R equal to R and phi of S equal to S, then for a suitable choice of automorphisms this will already guarantee you that the two sets are forward complete and backwards complete, right. But now these conditions are very easy to satisfy, because any automorphism defined over Fq will really leave this set invariant, and will leave this one invariant too; any automorphism down here leaving this set invariant will give you one condition, you basically want it to be a linear map, and from here you will get the other condition. So you just have to check whether you can satisfy these conditions together with the requirement that you have a totally ramified point in all of those extensions, and you can check that this can indeed always be satisfied, so you obtain the following theorem: for every finite field Fq with q at least 4, there is always an explicit recursive tower over Fq with limit lambda lower bounded by 2 over (q minus 2). So let me maybe point out that this does not work for F2 and F3, and also it's a quite bad lower bound, yeah: you see, as q grows it goes to zero, it does not improve any of the known bounds, yeah, that's the bad thing about it; the logarithmic lower bounds by Serre are better than this in any case. But the interesting thing is that it is explicit, and there seems to be something special going on, yeah. So a question would be, once we know that these towers over prime fields exist, can you find better ones, can you find ones that work over F2 or F3? And since there seems to be something special going on, another question would be, is there a higher reason for what's going on here, are these towers maybe modular
in some sense and so on so I'll stop here thank you so if you if you fix the finite field then you fix the curve C0 and C1 this is the finite problem to find all possibilities that have and a long time ago Ali Maharaj did some kind of search yeah as it's been repeated I mean I think many people have looked with like exhaustive search in families but you kind of have to look for what you have to kind of decide a priori what you're looking for and then you will find all candidates satisfying the conditions that you have I mean for instance this condition that F and G are so simply related by just twisting with automorphisms that need not be the case for all right so it's basically you need to know what you are looking for and then you will find something and I mean I myself did lots of computer search and I never came across a tower over the finite field with five elements even which is already very small so it's if Q is five the extension degree will be six there is indeed not much but I mean it's always difficult and tricky to program it rightly and to make sure that you really check all the cases and so on so I mean even to verify that in the end you get a tower is not so automatically programmable you can kind of see whether certain things are satisfied but those are kind of difficult to convey to a computer but I guess this just slipped all the searches is my guess which was a surprise for me also so in this case it's only tamedly ramified so there is not much to I mean you ensure that that's I mean the ramification the points that ramify will only be those in the set are prime over there and all of them will be tamedly ramified so in this inequality that I've wrote the C will in fact be one in all the cases so this is really a very simple tower there is no wild ramification going on nothing I mean it's kind of for me if someone would have told me a year ago such a tower exists I would have said no if it would then someone would have found it either by computer search or by just looking at this very natural covering so this is really a philosophical question did I understand correctly that your view or at least guess might be that for for q prime that a of q should still be square root of q minus one that's my guess but I guess that's like these recursive towers they are kind of very special if you want to kind of construct them if you want to come up even with a modular construction that gives you something recursive it gives you lots of restrictions so it could be that this is the best you can do with a recursive tower but I mean if you forget about the condition of being recursive and having all the symmetry with the pyramid and so on you could do much better I guess but yeah I guess that was really my question I mean if you believe Elki's Fantasia that these recursive towers are all modular in some sense and you believe that the many points on the modular curves are really all coming from super singular points I don't see how you're ever going to get achieve that bound using a recursive tower yeah yeah I mean it might be something non recursive in that case it's okay we have time for final question um is it is it known in theory that you can solve the the decision problem in which you're given the the c0 with the map to c minus one and your task is to decide whether there exists some tower where you choose the you form this fire product component I don't know of any result decide whether there is a there is an odd a good a slightly good tower like this yeah it's an interesting 
question but I don't know of any result in that direction so my guess would be it's no one might have looked at it until now but I wouldn't know it it's not the question I had talked about before okay let's thank the speaker again
Curves over finite fields of large genus with many rational points have been of interest for both theoretical reasons and for applications. In the past, various methods have been employed for the construction of such curves. One such method is by means of explicit recursive equations and will be the emphasis of this talk. The first explicit examples were found by Garcia–Stichtenoth over quadratic finite fields in 1995. Afterwards followed the discovery of good towers over cubic finite fields and finally all nonprime finite fields in 2013 (B.–Beelen–Garcia–Stichtenoth). The recursive nature of these towers makes them very special and in fact all good examples have been shown to have a modular interpretation of some sort. The question of finding good recursive towers over prime fields resisted all attempts for several decades and led to the common belief that such towers might not exist. In this talk I will try to give an overview of the landscape of explicit recursive towers and present a recently discovered tower over all finite fields including prime fields, except F2 and F3. This is joint work with Christophe Ritzenthaler.
10.5446/53537 (DOI)
First of all, of course, I'd like to very much thank the organisers for inviting me to speak here. [unintelligible] ...with funding in one way or another, and I would like to say that I very much hope that the one in the middle will continue to be able to fund people like Nuno to come and work in Warwick, and many others, for many years to come. Okay, so I'm going to talk about congruences between elliptic curves and their symplectic type, and there are of course two sorts of sides to the story I'm going to tell you. [unintelligible]
[unintelligible; isolated fragments refer to the mod p Galois representation with values in the group GL2(Fp) and to the Galois module structure of the p-torsion]
OK. OK. So now, it is possible that between the same two curves, with the same prime p, you might have both a symplectic and an anti-symplectic congruence. But if that happened, then, by composing one with the inverse of the other, you'd have an anti-symplectic automorphism of E1[p]. And actually that can't happen in the case that E1[p] is an irreducible representation, because by Schur's lemma, if E1[p] is irreducible as a Galois module, then the only automorphisms are scalar multiplications. And you will see that if phi is just a scalar multiplication, you're multiplying both points by the same scalar, so the d is the square of that scalar. OK. Right. So there's one situation in which it's very easy to construct congruences, which I want to kind of explain so that we can in a sense eliminate it. And that's when you have an isogeny. If you have an isogeny between E1 and E2, and as long as the isogeny degree is coprime to p, then the isogeny induces a map from the p-torsion on E1 to the p-torsion on E2 that has all the properties we want. Assuming the isogeny is defined over K, this will be a GK-equivariant map. And there is a very easy criterion for the p-congruences that arise this way, to say whether they are symplectic or not. They're symplectic or not just according to whether the degree of the isogeny is a quadratic residue or not. And I'll prove that to you, because we're supposed to always prove something in a talk, and probably this is the easiest thing. So you just do a little calculation. It's based on Weil reciprocity. So if you apply the isogeny to P and Q and then apply the Weil pairing on E2, then by Weil reciprocity you can pick up this isogeny and put it down over here as its dual. So if phi hat is the dual, then phi hat times phi is multiplication by the degree, and so by bilinearity that gets raised into the power. So this is an example of the formula on the previous slide, and the quantity I call d phi there is just the degree of the isogeny. OK, so if we're working with p equals 7, then 2-isogenies induce symplectic congruences, whereas 3-isogenies induce anti-symplectic congruences. OK, so now we can refine our question and say, OK, apart from isogenies, are there any other congruences? And the answer is yes, but as p gets larger, they become rarer. And it is conjectured that they die out altogether, and I'll say a bit more about that now. So that's the subject of the Frey-Mazur conjecture. So there are very slightly different forms of this. I won't go into it in great detail. So this is stated over Q. You could replace Q by a different number field if you wanted. So in this uniform version it says that there is some constant, depending only on the field, in this case Q, such that basically the only mod p congruences between elliptic curves arise through isogenies for p greater than that constant. OK, so for all but finitely many primes, there are no mod p congruences except those furnished by isogenies. And there are various ideas as to what the correct value of this C is, but the conjecture hasn't been proved.
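To make the little Weil-reciprocity calculation easier to read than it is in the spoken version, here it is written out; this is a sketch, with the notation e_{E,p} for the Weil pairing on E[p] and d_phi for the degree, following my reading of the slide referred to above.

```latex
% If \phi : E_1 \to E_2 is an isogeny defined over K with p \nmid \deg\phi,
% it restricts to a G_K-equivariant isomorphism E_1[p] \to E_2[p], and for
% P, Q \in E_1[p]:
\[
  e_{E_2,p}\bigl(\phi(P), \phi(Q)\bigr)
    \;=\; e_{E_1,p}\bigl(P, \widehat{\phi}\,\phi(Q)\bigr)
    \;=\; e_{E_1,p}\bigl(P, [\deg\phi]\,Q\bigr)
    \;=\; e_{E_1,p}(P, Q)^{\deg\phi}.
\]
% So d_\phi = \deg\phi, and the induced congruence is symplectic exactly when
% \deg\phi is a square mod p.  For p = 7 the nonzero squares are \{1, 2, 4\},
% so 2-isogenies give symplectic and 3-isogenies give anti-symplectic congruences.
```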
The non-uniform version of this would fix one curve E and ask for a bound depending on the ground, depending on E on mod P congruences between that curve and all others. And this is the uniform one where you have to let E1 and E2 both vary. OK, so as we'll see over Q, mod P congruences that don't arise through isogines exist for all primes up to 17 at least. And one theorem that we have is that if you restrict to elliptic curves of conductor less than 400,000, which is perhaps rather a pathetic thing to do because that's only a finite set of curves, right? So you could just compare each one with every other one, but our proof is slightly more slightly clever than that. But in this range, which is all the elliptic curves in the LMFDB database, the elliptic curve part of which used to be called the Cremonid database, but isn't any longer. All in our database, there are no congruences modulo any prime greater than 17. But for all primes less than 17, there are less than or equal to 17, there are. And I'll say something about how we prove that a bit later on. So as I say, there's quite a lot of history behind this Freibhmeiser conjecture, which I don't really want to go into. Some people have been known to conjecture that it actually holds with 23 because they seem to think that there is no, you know, the world wouldn't come to an end if you found a 19 or even a 23 congruence, although nobody has found one, but somehow if you found a congruence modulo, whatever the next prime after 23 is, then somehow something more serious would happen. So for the small primes P for reasons I will come on to in a minute, like for P equals 23 and five, they're really very, very common. And so we kind of, well, I'll ignore P less than or equal to five almost entirely. And then as P goes up through seven, 11, 13, 17, they get fewer in number, as I'll show you very explicitly. By the time you get to 17, there's essentially only one and it's associated with these two curves. And if you ever get a chance to follow through this talk on the PDFs yourself, you will find that whenever I give an elliptic curve label, I haven't tried this on a Macintosh. I don't know how Macintoshes work. This document is trying to connect to. Is that OK? Yeah, OK. Anyway. OK, so every elliptic curve has a homepage and this is the homepage of this curve. And in fact, this curve even knows that it has this nice property because right down at the bottom, it actually says, you can't read that, it's too small, but it says that this curve is mod 17 congwit to another curve. Anyway, oops. OK, so right, so now I'll talk about how we find congresses in the database both for individual primes P and as suggested by the previous theorem, somehow for all primes, which clearly we didn't do one at a time. And so here's an idea of how big the database is. And of course, I would encourage you all to click on things when you find you can, because that's the link to the elliptic curve database for elliptic curves of a queue at least. And we've got two and a half million curves and up to isogeny about one and three-quarter million. So one thing we don't want to do is look at curves one by one. We don't want to look at all pairs of curves and say, are they congruent and modular? What? Because one and a three-quarter million choose two is too big. So we want to do something else. We only want to find congwrences between non-isogenist curves. So we'll start up by picking one in every isogeny class. 
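Before going on to the computation, it may help to have the uniform form of the Frey-Mazur conjecture described above written out symbolically; this is my paraphrase of the spoken statement, over Q for concreteness.

```latex
% Uniform Frey-Mazur conjecture over K = \mathbb{Q} (paraphrase):
% there is a constant C = C_K such that for every prime p > C and all
% elliptic curves E_1, E_2 over K,
\[
  \overline{\rho}_{E_1,p} \;\simeq\; \overline{\rho}_{E_2,p}
  \quad\Longrightarrow\quad
  E_1 \text{ and } E_2 \text{ are isogenous over } K.
\]
% The non-uniform version fixes E_1 and allows the constant to depend on it.
```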
And we're going to start out by comparing the traces of the representations. So the traces of the representations are just the numbers A, L, of E that appeared in Alina's talk. And if the two curves satisfy a p-conggwrence, then you know that the Al of E1 and Al of E2 are going to be congruent mod p. And I put here, here we're going to exclude the primes dividing p and both conductors, because as I think Alina even said, that if you have this congruence for literally all p, then the curves must be isogenist. But this is something we can check. So if you give me two curves E1 and E2, then just by looking at for various primes L with this restriction, just look at the difference Al of E1 minus Al of E2. And if it's non-zero, then it only has finitely priny primes dividing it. And that gives you a restriction on p. And then you can do this for lots of ales, take the GCD of all these differences, and you'll soon find out that whether there is a congruence or whether there isn't a congruence or not. But we don't want to do this for all pairs of curves, there are too many pairs. So we want to do something where we can somehow do a pre-computation once for each curve and then somehow do the comparison afterwards. And one thing that makes this a bit difficult is that somehow if I store information about E1, then later on I'll get into trouble comparing to compare it with E2 because E2 has a different forbidden set of primes. And there's a very simple idea that allows you to get around that based on the fact that all of our conductors are less than or equal to 400,000, and that is simply that we only ever use primes L that are greater than 400,000. And if L is greater than 400,000, it certainly doesn't divide N1 or N2, and it won't divide P either because I'm never going to do this for a P bigger than 100. OK, so that's the first idea that if you use large L, you can sort of ignore this condition here. I was very pleased when I had that idea. So, you know, as you get older, you get pleased by simpler and simpler ideas occurring to you. OK, so we use a kind of sieving procedure together with a hash function. I hope that's not too kind of computer science-y for this audience. So you think of a number B and you take the first B primes bigger than 400,000, and for each B, so for all those primes you compute, and then we take one curve at a time, this first bar's through. So for each curve E, you compute A L of E for all these L's in this finite set, and then the bar on the A L of E, I don't know, this thing doesn't really work properly, does it? OK, in that summation you see there's a bar on top of A L of E. The bar here just means reduce mod P to the range 0 to P minus 1, and then you put those into the digits of a number in base P. So that gives you an integer, positive in the French sense, integer. So for every curve and for every P we get this integer, and if two curves were mod P congruent, we would get the same integer. So we regard that integer value as being the hash of E at P. So any two P congruent curves have the same hash value, and we would hope that if we take B not too small, we wouldn't get any false positives. You'd get a false positive if the A L's for all the L's in the set you were using were congruent mod P, but some of the other L's weren't. 
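As a concrete illustration, here is a small Python sketch of the hash just described: take the first B primes above 400,000, reduce each trace a_ell mod p, and pack the results as the digits of a base-p integer. The helper ap_of_curve, standing in for however the traces are actually obtained (from the database, or by counting points), is a hypothetical placeholder of mine and not part of any existing library.

```python
from sympy import nextprime

def sieve_primes(start=400_000, count=40):
    """Return the first `count` primes greater than `start`
    (all larger than any conductor in the range, so the divisibility
    restrictions on ell can be ignored)."""
    primes, q = [], start
    for _ in range(count):
        q = nextprime(q)
        primes.append(q)
    return primes

def hash_curve(ap_of_curve, p, primes):
    """Base-p integer whose digits are a_ell mod p for ell in `primes`.

    `ap_of_curve(ell)` is a hypothetical callable returning the trace a_ell
    of the chosen curve; any two p-congruent curves get the same hash value.
    """
    h = 0
    for ell in primes:
        h = h * p + (ap_of_curve(ell) % p)
    return h
```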
OK, and another nice thing about this is that we can parallelise it, because if you do this for a single prime P on the database, it takes, I don't know, half an hour or something to compute all these, even without trying very hard to optimise things, I'm sure Drew over there could get it down to probably one minute, but it's not necessary because we only really have to do this once. How we parallelise it, most of the time taken in computing these hatches is computing the A L of E's and you only have to do that once. That takes most of the time and then very quickly you can reduce mod P and put in this formula. You can do lots of P's in parallel and as you'll see later on I'll be doing all primes up to 100 together. OK, so what do you do with these hatches? Well, whenever you find a hatch you look to see if you've seen the same hatch before and if you haven't you make up a new row in a table. We're going to end up with a table with the rows indexed by all the different hash values and again, each hash value will just list each curve that we find. OK, so if by the end of the whole process next to one hash value you've only got one curve, well that curve is definitely not P congruent to anything else. So you can throw all those away and you're just left with the sets of curves which all have the same hash value which you expect if the number B there isn't too small. You expect those have a very good chance at least of being genuinely P congruent and here I'm only using the traces and so because some of these representations are reducible I'm really at this stage only really checking isomorphism of the modules up to semi-sympidification. Most of them are irreducible anyway. OK, so we have some sort of post-processing to do after this because we have to check that the congruences really do hold for all L which we do and we also have to do something extra in the case where the representations are reducible. I won't go into all the details of that. So this actually works very well in practice if you pick the first 40 primes more than 400,000. When I first did it I only picked the first 30 primes and I found one very annoying pair of curves kept on tripping me up. There's always the same pair of curves and the reason is that if you take these two curves so the labels are 2592.1 so 2592.1 that's the conductor A is the azoginy class and what is the first curve in the class. So those two curves have exactly the same ALs for the first 35 primes of the 400,000. In fact these curves they're twisted of each other. They both have Cm by square root minus seven so half the traces are zero anyway and the ones that aren't zero just happen to be quadratic residues, modulo, whatever the twist is. So when I chose 30 or even 35 this pair always was the only false clash I ever found. So eventually I turned up B to 40 and then I get no false clashes at all. 36 would be enough. Then this interval was just an accident. Yeah, I probably should have started at 500,000 because don't tell anyone but I have actually started computing elliptic curves of conductor 400,001 upwards. So don't tell anyone because I promised people that I never would because I'd done it enough. I spent my entire life computing elliptic curves and I really should do something else for change. Anyway, yeah, I'd have to redo this entire calculation using 500,000 probably, yes. Anyway, yeah, so there's some more work to do with reusability which I don't really want to get into. 
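And the table-building step described at the start of this passage, keeping one bucket per hash value and retaining only the buckets with at least two members, might look like the following sketch; again the input iterable of (label, hash) pairs is hypothetical, and this is not the code that was actually run on the database.

```python
from collections import defaultdict

def congruence_candidates(labelled_hashes):
    """Group isogeny-class labels by hash value and keep only the clashes.

    `labelled_hashes` is any iterable of (label, hash_value) pairs, e.g. built
    with hash_curve above for one fixed prime p.  Buckets of size >= 2 are the
    candidate sets of mutually p-congruent classes, still to be verified
    (for example via the Kraus/Sturm-bound criterion mentioned later).
    """
    buckets = defaultdict(list)
    for label, h in labelled_hashes:
        buckets[h].append(label)
    return [labels for labels in buckets.values() if len(labels) >= 2]
```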
It turns out that it's actually much easier for two reducible curves to be, have isomorphic representations after surface simplification than to actually have genuinely isomorphic representations. But the nature of the further tests we carry out I won't do. Oh, yes, the other thing we do have to do is I haven't proved p-congerants here. I've only proved that the ALs are congruent for this particular set of 40 primes. So in order to prove that they really are congruent then we use a criterion of Kraus which is based on the the stern bound to show that it's enough to check after we find these congresses, it's enough to check a rather large list of primes L, L up to some rather large number. And I actually did that for all of these so they all were genuinely congruent. Okay, so at the moment I'm not telling you which primes I did this for except to say that I did this for all primes up to 100. So here's the results we get. So I put in p equals 5 although we're mainly interested in for the primes that are greater than 7 because for 2, 3 and 5 there's a certain curve has genus 0 which I'll show you in a minute which means that for every curve there are infinitely many other congruent curves, even infinitely many different Jn variants and so on. So mod 5 congresses and let alone mod 2 and 3 are just too common to be interesting. So that's the fact over 100,000. So when I say sets here, this is sets produced by the hashing operation. So these are sets of isogenic classes with the property that any curve in any one of those isogenic classes is mod p congruent to every other curve. So there are 102 and this is the number of sets which have size greater than 1. And in fact the maximum set hit size here was 18. And for the reducible cases we've got sets of 430 different isogenic classes all have the same mod 5 representation up to 70 simplification. It's not so surprising. The thing is just determined by a couple of characters with values in F5 star. Right, so let's move on to p equals 7. So you see 7 is a sort of the sweet spot. After 7 things start to get rare and before 7 they're just so common. They're not interesting but 7 is sort of they're kind of not so rare and not so common either. We have 20,138 sets of size 2 or more of which most of them just under 20,000 are irreducible and these sets have size at most 5. The reducible ones there are 255 sets and some of them are quite big 76 but this is only up to semi simplification. I'll refine it on the next slide. No, not on the next slide. I'll refine it down here already. So for p equals 7, so what happens? So why is the 337 bigger than 255 when I've thrown some away? Well what's happening is that within this set of this collection of 255 sets of size up to 76. For each of those sets we then further refine them into subsets where the congruence is the isomorphism between the modules is not just up to semi simplification but they're actually isomorphic. So each one of those up to 76 sets gets subdivided further. That puts the number from 255 makes it larger and then we throw away the sets of size 1 again and that gives you the 337. And after doing that process none of the sets is bigger than 4. So genuinely isomorphic mod p representations for p equals 7 we find we never find more than five isogymnig classes with these mod 7 congruences. Right, so for p equals 11 there are still quite a few. They're all irreducible and they're always just pairs, no bigger sets. Similarly for 13 we go down to only 150 pairs and for 17 there are only eight pairs. 
And even as I think I mentioned on an earlier slide for 17 the eight pairs that's a bit of a cheat. There really is a really only one pair the others are all quadratic twists. We'll talk about twists a bit more later but if you take any pair of p congruent curves and you make the same quadratic twist on both of them you get another pair of mod p congruent curves even with the same symplectic type. So I could be counting this whole thing up to twist that's a bit hard to do. I'll say a bit more about that later. OK, so those are the congrwences you find and for 19 up to 97 you find nothing. I think until I did this I'd done 19 and 23 but I hadn't systematically gone all the way up there but I did it using this parallelisation that I spoke of earlier. OK, so far I haven't said anything about whether these congruences are symplectic or not. So now let's go on to do that. So when we started this project Nuno Freitas had had this his great long paper it's over 100 pages which I gather is now to appear with a whole lot of different local criteria for determining whether a given congruence is symplectic or is he symplectic. So in their work they assume that E1P and E2P are congruence and they give various criteria which I'm not going to explain there are lots of them and some of them are simple some of them are complicated saying that if this test passes it's symplectic or if this test passes then it's anti-symplectic. And he came to me because he said I'd like to know how good their tests are at actually distinguishing these congruences on our actual database of real curves. Because they knew that the local criteria did not apply to every congruence. But they and we did actually come up with some examples of curves where none of their criteria work although they actually work really well because they apply to all the congruences we found in the database. We had to go way beyond the database to find cases where their criteria don't work. So on the one hand so all of these congruences that I found by going through the database I gave all these pairs of curves to Nuno who came back and told me whether they were all symplectic or not. And in the case of p equals 7 I did an entirely independent calculation based on modular curves which I'm going to tell you about which also decided whether they were symplectic or not and happily the results agreed. That was only for p equals 7 that we have these two different methods. So yes okay there are lots of these tests some involve primes of bad reduction some involve primes of good reduction some of them are fast so you try those first some of them are slower. Nuno has an implementation of all of this in magma. He hadn't implemented them all before we started I kept on giving him more and more pairs of congruent curves and he had to keep on implementing more and more and more of the criteria. Until by the end he'd implemented every single one of them they were all used but he had 100% success rate which was nice. Okay some of the global methods I'll mention towards the end of my talk are able to decide on the symplectic character of congruences when none of these local tests work. Right so as I said for p equals 7 I used a different method which meant that in a sense we didn't need the local criteria but we did need it for all other primes and so this is based on modular curves so I'll explain that briefly. So if you fix a prime p then we have a modular curve x of p. The first speaker this morning also mentioned modular curves so I know that's allowed to do. 
So X(p) is a modular curve which, roughly speaking, parameterises curves with a level p structure. So really it parameterises triples consisting of an elliptic curve and a basis for the p-torsion. There are subtleties in there which I won't go into. Okay, so these curves are well known: for p up to 5 they have genus 0, and they have infinitely many points over any field; X(7) has genus 3, and the usual model taken for that is the Klein quartic, it's a plane quartic curve; for the next prime, 11, the genus is already 26, so using these things in practice is not going to be that easy, but for p equals 7 it's very useful. So it's not the curves X(p) we'll need directly, we actually need twists of those. So if you fix, as well as p, an elliptic curve E, and let's be concrete, let's say pick an elliptic curve E over Q, then there's a certain twist of the curve X(p) called X_E(p). I'll call it X_E plus, because this is to do with symplectic congruences, and we'll have an X_E minus later. So it's a twist of X(p) in the sense that they are isomorphic over Q-bar, so it's again a curve of the same genus, and the non-cuspidal points on it each correspond to a p-congruent curve. So strictly speaking the different rational points on X_E plus of p actually correspond to pairs consisting of another curve and a symplectic isomorphism between the p-torsion modules, up to scaling. So if you could get your hands on a nice explicit model for this curve X_E plus of p, you might be able to find rational points on it and hence find congruent curves. We're going to apply that in the other direction, because we've already got a congruent curve and we want to see, does it come from X_E plus of p. So that's for symplectic congruences. There is another twist, called X_E minus of p, which does the same thing for anti-symplectic congruences, another twist that has the same genus. OK, so for p equals 7, Kraus and Halberstadt back in 2003 gave an explicit model for this, and it's really explicit: if you tell me that the elliptic curve E is y squared equals x cubed plus a x plus b, you have those two numbers a and b, and their formula for X_E plus of 7 is just a ternary quartic in x, y, z, and each coefficient is a polynomial in a and b; you just plug them in and you get the quartic. And of course that by itself isn't good enough: for each point on it you need to know what the congruent curve is. And so even to know its j-invariant you have the map from X_E plus of 7 down to the j-line; that's a map of degree 168, and 168 is there because that's the order of PSL2(F7), in general it would be the order of PSL2(Fp). And so that just gives you the j-invariant of the curve, but we're interested in elliptic curves, not just curves up to twist. So you need formulas to tell you what the actual coefficients, the actual invariants, of the congruent curve E prime are. Kraus and Halberstadt's paper gives slightly incomplete formulas: they are there, and they're valid away from a small finite set, but when you go through, you know, two million examples, you hit the exceptions pretty soon, so I needed to fill in those gaps. And there's a model for X_E minus of 7 in a paper of Poonen and Stoll called Twists of X(7), but... I beg you, I beg you to tell him, I'll change them later. Right, I only remembered to put your name in yesterday, I said, I only remembered to put your name in yesterday. Anyway.
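As a quick sanity check on the degree 168 mentioned here (this is just standard group theory, not anything new from the talk):

```latex
% |PSL_2(\mathbb{F}_p)| = \tfrac{1}{2}\, p(p-1)(p+1) for an odd prime p, so
\[
  \#\,\mathrm{PSL}_2(\mathbb{F}_7) \;=\; \frac{7 \cdot 6 \cdot 8}{2} \;=\; 168,
\]
% which is indeed the degree of the map from X_E^{\pm}(7) down to the j-line.
```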
So more formulas for X_E plus or minus of 7, giving you the complete formulas for the invariants of the congruent curve, are given by Tom Fisher, who goes through absolutely everything in complete detail, for both the X_E plus and the X_E minus of 7, and he does it for the genus 26 curve as well; I haven't yet quite had the energy to implement all the formulas for X_E plus or minus of 11, but I will one day, but I have done it for X_E plus or minus of 7. So this is how we apply them, so this is the algorithm. This algorithm actually works over any field, maybe excluding characteristics 2, 3 and 7, so let's say characteristic zero. You give me two elliptic curves E and E prime and I'm going to tell you whether they are symplectically congruent or anti-symplectically congruent, and this is what I do. I compute the j-invariant of the second curve, and I compute both of these twists of the modular curve attached to E, X_E plus of 7 and X_E minus of 7, and then the next three steps I do once for X_E plus and once for X_E minus. So we have this explicit rational map j, and so we take j of E prime, which is in the codomain, and we pull it back; a priori you might get 168 preimages, but you never get so many. If you get no preimages defined over K, then they are definitely not symplectically congruent; if you get one or more, then you have to work a little bit harder, because then you've found that E is congruent to a curve with the same j-invariant as E prime, but not necessarily E prime itself. So then, that's where you use the supplementary formulas: for every point you found in the preimage, so these are all the points whose image is j of E prime, you actually find a model for the associated curve, call it E double prime, and then you have to check whether any of the E double primes you've got is E prime itself. And if so you've got a symplectic congruence, and otherwise you haven't, and then you do the same thing for anti-symplectic; so here we don't even need to assume that they are congruent at all to start with. So I carried this out for all of the pairs of congruent curves we found for p equals 7, and so I came to a conclusion as to whether they were symplectically congruent or not, and it agreed with what Nuno Freitas computed using his local criteria. Right, so I haven't done it for 11 yet. Right, so what are the results: how many of these congruences are symplectic and how many anti-symplectic? So I got slightly tired of typing in long numbers when I was preparing the talk, so I probably haven't told you everything. You may remember we had 19,883 non-trivial sets of isogeny classes with irreducible mod 7 representations which are mutually isomorphic. Each of those isogeny classes might have various isogenies, but we know how to test whether they induce symplectic or anti-symplectic congruences, depending on whether the degree is a quadratic residue mod 7. And what happens is that in about two thirds of the cases all of the isomorphisms are symplectic, and in about one third of the cases you get both anti-symplectic and symplectic. So in that second case you divide all the curves in all those isogeny classes into two subsets, so that within each subset the congruences are all symplectic, but between the two subsets they are anti-symplectic. And in the roughly 12,000 cases of the first kind, the second subset is empty. So that's the result of that; that's for the irreducible ones.
I don't think I wrote down the numbers for the reducible ones; there are far fewer. Reducible means that there is a 7-isogeny, and you have to go through some business of replacing one of the curves with a 7-isogenous curve; sometimes it's a bit of a mess to explain, so I decided to leave that out. Okay, anyway, the upshot was, as I think I said a few times already, that the local criteria certainly agree with this method, and they were always sufficient to decide on the congruence. Okay, for the primes greater than 7, so that's just 11, 13 and 17, we only have the local criteria to work with, and here are the results. So this is the number of congruent pairs that we saw before, and this is how many are symplectic and how many anti-symplectic; for 17 they are all anti-symplectic, though again there's really only one up to twist. I don't think it's very easy to say from those figures whether symplectic is beating anti-symplectic or not. Yeah. So. [Audience question, partly inaudible.] I wasn't, we weren't looking at that particularly; in Kraus and Halberstadt's paper on X_E of 7 they do find, for some curves, that there are many, I can't say, but there are some curves which are congruent to many other curves. But I wasn't actually looking for that specifically. Drew? ... I haven't had to do that. The thing is, these numbers will include quadratic twists, which really should be eliminated. It's not so hard here, because they're only pairs: so, you know, you twist a pair and you get a new pair, but it might be that one of the conductors is now outside the range and then you won't see that pair again, so it's a bit artificial. And when we have the larger sets, like for p equals 7 we had sets of, you know, five mutually congruent isogeny classes, then if you do a quadratic twist that set of five might go down to a set of four, because one of them is outside the range, and so I wasn't quite clear how to do that. What I did do is just count all the individual j-invariants. So that might be, it's not quite what you are asking, but yeah. OK. No, I haven't tried to do that. I'd be interested in talking more about that with you. OK. So I mentioned our theorem about the Frey-Mazur conjecture, where we prove that there were no congruences modulo any prime greater than 100. And 100 isn't a round number here: 100 is actually the number that works, as I'll explain. So first of all, for the curves, we divide this into comparing curves of the same conductor and curves of different conductors. For curves of the same conductor we really do just look at all pairs of curves of the same conductor up to isogeny, and we just do the operation that I mentioned earlier: we take the GCD of this difference of traces, putting in more and more ells, until the GCD goes down below 19, and then we stop. So there are no congruences between curves of the same conductor. For curves of different conductors, Nuno came up with this lemma, based on some results of Kraus again. It says the following: if you have a mod p congruence, where p is at least 5, and you have different conductors, then, one way round or the other, you can order the conductors in such a way that, for one of the curves, say the curve E_i,
So all we have to do is go through all the conductors up to 400,000, take all the primes dividing the conductor exactly once, look at all the elliptic curves of that conductor which have multiplicative reduction there, look at the exponents of the minimal discriminant at those primes, and just collect all the exponents that ever occur. If you do that, you find that you do get exponent 97, and you do not get any exponent greater than 100. In fact you get every single exponent up to 97 except 89; so there is a fact about the number 89 that you did not know when you woke up this morning: if you look at the factorisations of all the minimal discriminants of all the elliptic curves in the database, at multiplicative primes, at any prime in fact, you will never find an 89th power. Anyway, that proves that doing all primes up to 100 was rather a happy choice, because it is exactly what you need.

OK, in my remaining few minutes I will tell you some of the things we found out about twists, quadratic ones; I probably won't have time for the quartic and sextic ones. Here is the fact I mentioned already: if you take a congruence and apply the same quadratic twist on both sides, you get another congruence, and it is of the same symplectic type. So I could have tried to present everything up to twist; as I said already that is quite hard to do, but you can count j-invariants, and these are the numbers. How many different j-invariants of elliptic curves with an irreducible mod 7 congruence did we find? About 10,000, and a much smaller number for the reducible ones; by comparison there are over a million different j-invariants among the roughly two and a half million curves in the database, so it is about 1%. And those are the figures for the other three primes. For p = 17 I think I have given you the example already, so let us move on: these are the two j-invariants of the two mod 17 congruent curves, and that is an antisymplectic congruence.

OK, what about other congruences between twists? We found a phenomenon, actually, while we were looking for examples where the local criteria did not apply, and here is what happens. Focus on the middle of these three conditions: whenever the image of the mod p representation, some subgroup of GL_2(F_p), is contained in the normaliser of a Cartan subgroup but not contained in the Cartan subgroup itself (so in particular the projective image is dihedral; it is not quite an if-and-only-if because there are some other cases), then you can always construct a quadratic twist whose mod p representation is congruent to the original one. I will very briefly show you how that works. It actually goes both ways as well: if you have a congruence between quadratic twists, and you impose a few mild conditions on top, then the image is contained in the normaliser of a Cartan. So, with some mild conditions ensuring that the image is not too small, these last two statements are equivalent. Here is how it works. Your image is contained in the normaliser of a Cartan subgroup, so it has a subgroup of index 2, namely its intersection with the Cartan, and all the matrices in the normaliser of a Cartan that are not in the Cartan itself have trace 0; zero mod p, of course, because this is a mod p representation.
OK, so doing a quadratic twist flips the sign of half the traces, and if you make the right quadratic twist you are flipping the sign exactly on the traces that were 0 anyway, so you are not changing the traces at all. So as long as the representation is irreducible you get the same representation. That is how it works in one direction, and a very similar argument gives the other direction. In this way you are able not only to find congruences, but in this situation you can also determine whether they are symplectic or not. There is a nice theorem about this, whose proof I am rather proud of: it is one of those things where the first proof took several pages of horrible calculations and now it is really succinct, and I am happy with it.

So here is the result. Suppose you have a mod p congruence between two quadratic twists E and E^(d), and the image is contained in the normaliser of a Cartan subgroup but not in the Cartan itself. The Cartan subgroup is either split or non-split, and p is either +1 or -1 mod 4, and if you tell me those two pieces of information I will tell you whether the congruence is symplectic: it is symplectic if the Cartan is split and p is 1 mod 4, or non-split and p is 3 mod 4, and otherwise it is antisymplectic. And we see examples of that.

Usually, when you have this Cartan-normaliser situation, it holds for a unique Cartan subgroup. But when the image is quite small (not so small that the whole thing degenerates) it can happen in more than one way, and this is quite fun. If the image is C2 x C2 then there are three subgroups of index two, and so you get three different quadratic twists which are congruent. In that case, if you apply this to the situation where I had my map from, say, X_E^+(7) down to the j-line, the value j(E') will have three preimages, or maybe four counting the base curve. Indeed, one way in which I came across these results was by asking how many preimages you can have, at least over Q.

So here is what happens. You have these three different subgroups of index two, and sometimes all three congruences are symplectic, and sometimes one is symplectic and the other two are antisymplectic; those are the only possibilities. The first case can only happen if K contains the quadratic subfield of the p-th cyclotomic field, that is, if K contains the square root of p* = plus or minus p, whichever is 1 mod 4. In that case all three are symplectic, and in the other case exactly one is symplectic, namely the twist by the square root of this p* specifically, nothing else, and the other two are antisymplectic. I think I have an example of that. Here it is: the conductor is 2 times 3 cubed times 11 squared. So it is certainly plausible that if you twist by -3 and by -11 you might get the same conductor, and in fact you do. You get a symplectic congruence to this curve (notice this is p = 3, a mod 3 congruence, and for p = 3 we have 3* = -3, so the symplectic one is the -3 twist), but you also get antisymplectic congruences to these other two curves. And these can all be verified by the local criteria, which we did.
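Written out as a formula, the twist criterion just stated (my paraphrase) reads: if $E$ and its quadratic twist $E^{(d)}$ are congruent mod $p$ and the image of the mod $p$ representation lies in the normaliser of a Cartan subgroup $C$ but not in $C$ itself, then the congruence is
\[
\text{symplectic} \iff \bigl(C\ \text{split and}\ p\equiv 1 \pmod 4\bigr)\ \text{or}\ \bigl(C\ \text{non-split and}\ p\equiv 3 \pmod 4\bigr),
\]
and antisymplectic otherwise.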
And I have one brief thing to say about quartic twists and one brief thing about sextic twists; very brief, one slide each. If you look for quartic twists you find that there is nothing interesting at all. For quartic twists you have to have j-invariant 1728, and the only congruences you ever get are between the curve with coefficient A and the curve with coefficient -4A, and that is boring because those two curves are 2-isogenous, so the congruence is symplectic or not according to p mod 8, that is, according to whether 2 is a square mod p. So there is nothing interesting there, and it is even less interesting if -1 is a square, because then the two curves are actually isomorphic.

For sextic twists we have just one strange thing, and I really wanted to get to this. We have only done it over Q and for p = 7, but this is what happens. Again you always get a kind of boring twist with an isogenous curve: here the parameter is the B in x cubed plus B, this B being what we twist, and if you replace B by -27B you get a 3-isogenous curve; 3 is a quadratic non-residue mod 7, so that one is antisymplectic. That is no surprise. But this is the funny one, which we found by observation and which I then managed to prove always works: the curve with x cubed plus B and the curve with x cubed minus 28 over B are always symplectically mod 7 congruent to each other. Why? Well, you just work it out, and it happens. I have no idea whether there is a generalisation of this for larger p; I hope there might be, but we haven't done it yet. So I will stop there. Thank you.

[Applause]

[Chair] There is still time for short comments or questions.

[Comment from the audience] One thing that might be relevant is to look at the diagonal modular surfaces; basically you take this twisted modular curve and fibre it over the j-line. The geometric invariants of those surfaces have actually been determined by Kani and Schanz. For instance, for 7 in the symplectic case you get a rational surface (yes, sometimes a rational surface), whereas for the antisymplectic case it is not rational, it is a K3. That would already explain why you get far more symplectic mod 7 congruences than antisymplectic ones. And I think it is by looking at the geometry of those surfaces that people reached the conclusion that there probably would not be anything beyond 23; so that is looking at the surfaces rather than just the curves.

[Speaker] So you get a surface, I guess, when you let E1 and E2 both vary instead of fixing E1. [Reply] Yes, and beyond that they are of general type. [Speaker] Yes, yes.

[Chair] Any more questions or comments? Then I think we may thank John again.
In this talk I will describe a systematic investigation into congruences between the mod p torsion modules of elliptic curves defined over Q. For each such curve E and prime p the p-torsion E[p] of E is a 2-dimensional vector space over F_p which carries a Galois action of the absolute Galois group G_Q. The structure of this G_Q-module is very well understood, thanks to the work of J.-P. Serre and others. When we say the two curves E and E' are "congruent" we mean that E[p] and E'[p] are isomorphic as G_Q-modules. While such congruences are known to exist for all primes up to 17, the Frey-Mazur conjecture states that p is bounded: more precisely, that there exists B > 0 such that if p > B and E[p] and E'[p] are isomorphic then E and E' are isogenous. We report on work toward establishing such a bound for the elliptic curves in the LMFDB database. Secondly, we describe methods for determining whether or not a given isomorphism between E[p] and E'[p] is symplectic (preserves the Weil pairing) or antisymplectic, and report on the results of applying these methods to the curves in the database. This is joint work with Nuno Freitas (Warwick).
10.5446/53526 (DOI)
Well, let me first thank the organizers for this nice opportunity to be here in Luminy. This is my second time, and it is a wonderful place to do mathematics, so thank you very much. This is a meeting on geometry and algebra, so I decided to talk about something with a geometric flavour; but I am an algebraist working on non-associative algebra, so most of the talk will be about the classification of certain algebras involved in the triality phenomenon.

So what is this triality? If we look in the dictionary it does not say much: "the state of being threefold, composed of three parts", so not much information. But if one looks at Wikipedia there is much more, and it says something about the Dynkin diagram D4, which is the most symmetric of the Dynkin diagrams of the simple Lie algebras. It has to do with outer automorphisms of order three of the simple Lie algebra of type D4, or of the spin group in dimension eight, or of the corresponding orthogonal groups. It also says that there is a geometric version of triality, analogous to duality in projective geometry, involving one-, two- or four-dimensional subspaces of some eight-dimensional space; this is called geometric triality. So my plan is to review a little what geometric triality is, then move to a class of non-associative algebras closely related to the octonions which are at the heart of this triality phenomenon, and then, if time permits, turn to algebraic results related to triality.

Geometric triality. You all know about duality in projective geometry: in a projective space there is a duality between points and hyperplanes. For instance, in a projective plane, given a couple of points there is a unique line through them and, dually, two lines meet in a unique point. The same holds in general with points and hyperplanes. What about triality? We have to start with an eight-dimensional vector space carrying a non-degenerate quadratic form of maximal Witt index, so index four; this means that there are four-dimensional totally isotropic subspaces. Now we consider the isotropic subspaces of this eight-dimensional space, of dimension one, two, three or four, and the corresponding quadric in projective space. Corresponding to the one-dimensional isotropic subspaces we have the points of the quadric, the two-dimensional isotropic subspaces give the lines, and similarly we have planes and solids. It turns out that there are two kinds of solids. We say that two solids (remember, solids correspond to four-dimensional maximal isotropic subspaces) are of the same kind if their intersection, as vector subspaces, has even dimension. And it turns out that two solids are of the same kind if and only if one can be moved to the other by an element of the special orthogonal group, so they lie on the same orbit, and there are exactly two kinds of solids.
Then there are natural incidence relations: two points are incident if they lie on a line inside the quadric; two solids of the same kind are incident if their intersection is not trivial; two solids of different kinds are incident if their intersection is a plane; and a point is incident with a solid if it lies in that solid. Already in 1913 Study considered the variety of solids of one of the two given kinds, and he checked that they form a quadric isomorphic to the original quadric with maximal Witt index. He then showed that any proposition in the geometry of the quadric, with these incidence relations, remains true if we permute points, solids of one kind and solids of the other kind cyclically. So we have points, solids of the first kind and solids of the second kind, we permute them cyclically, and all properties remain valid; it is like duality in projective geometry, where we interchange points and hyperplanes, except that here three families are permuted cyclically. Élie Cartan, when he studied this phenomenon, expressed it very nicely: it is "a principle of triality analogous to the principle of duality of projective geometry".

This phenomenon of triality is classically explained in terms of octonions, or Cayley algebras. You may be familiar with the real division algebras: the real numbers, the complex numbers, Hamilton's quaternions and the octonions. We will consider more general algebras, so let me give the precise definitions. A composition algebra will be an algebra, so a vector space over a field with a multiplication, together with a non-degenerate multiplicative norm: the norm of a product is the product of the norms, and the quadratic form must be non-singular. Since later on we will consider non-unital composition algebras, the unital composition algebras will be called here Hurwitz algebras. You probably know that they exist only in dimension one, two, four or eight; they are the analogues of the reals, the complexes, the quaternions and the octonions. If we forget about the existence of a unit element, the dimension of a finite-dimensional composition algebra can still only be one, two, four or eight, although there are infinite-dimensional non-unital composition algebras of arbitrary infinite dimension. The two-dimensional Hurwitz algebras are just the quadratic étale algebras, the four-dimensional ones are the quaternion algebras, and the eight-dimensional ones are called octonion, or Cayley, algebras.

For geometric triality we need quadrics corresponding to non-singular quadratic forms of maximal Witt index, so we need Hurwitz algebras with isotropic norm. Now, two Hurwitz algebras are isomorphic if and only if the corresponding quadratic forms are isometric, and in each dimension the ones with isotropic norm are all isomorphic: for each of the dimensions two, four and eight there is, up to isomorphism, a unique Hurwitz algebra with isotropic norm. In dimension two it is just the Cartesian product of two copies of the ground field; in dimension four it is the algebra of split quaternions, namely the two-by-two matrices with the determinant as norm, which is a non-degenerate quadratic form.
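To fix notation, here is a brief recap of these definitions in formulas (a sketch in my own notation, not copied from the slides):
\[
(C,\cdot,n):\quad n\colon C\to F \ \text{non-singular},\qquad n(x\cdot y)=n(x)\,n(y)\ \ \text{for all }x,y\in C,
\]
a Hurwitz algebra being a unital composition algebra, with $\dim_F C\in\{1,2,4,8\}$ in finite dimension; the split Hurwitz algebras with isotropic norm in dimensions two and four are
\[
F\times F,\ \ n(a,b)=ab, \qquad\qquad \mathrm{Mat}_2(F),\ \ n=\det.
\]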
And in dimension eight we have the split Cayley algebra, which can be described in terms of two-by-two matrices with scalars from our field on the diagonal and three-dimensional vectors in the off-diagonal slots, the norm being given in terms of the standard inner product of three-dimensional vectors. So if we start with the split Cayley algebra and consider its norm, this norm has maximal Witt index, and we can consider the elements of norm zero, the isotropic vectors; then we are exactly in the situation where geometric triality appears. What happens is that the solids of one kind are obtained as follows: you fix a non-zero isotropic element a of the split Cayley algebra and consider the elements x such that multiplying by a on the left gives zero; this has dimension four, and these are the solids of one kind. The solids of the other kind are the same, but multiplying on the other side. These are all the solids of the two kinds. A geometric triality is then performed by sending the point corresponding to the conjugate of an element to the associated solid of the first kind, and then cyclically to the associated solid of the second kind. This precise cyclic permutation gives a geometric triality: all questions about points translate into the same assertions about solids of one type or solids of the other type, and you can permute cyclically in this way. This is traditionally how geometric triality has been explained, using Cayley algebras.

This is reflected in the fact that the adjoint group of type D4 admits outer automorphisms: its group of outer automorphisms is isomorphic to the symmetric group of degree three, which corresponds to the three outer vertices of the Dynkin diagram. This is the most symmetric diagram, and its symmetry group is the symmetric group on three letters; these outer automorphisms correspond to geometric triality. But in 1959 Tits realized that there are two different kinds of triality. If the characteristic of the ground field is different from three, there is one triality corresponding to outer automorphisms of order three whose fixed subgroup is of type G2, and another triality where the fixed subgroup is of type A2, like SL3. In characteristic three the situation is more involved: for one kind of triality, again G2 appears, but for the other type the fixed group has dimension eight and is no longer simple; it has a large unipotent radical, so the situation is much more involved in characteristic three, and much more interesting from my point of view. The first kind of triality, the one where G2 appears, is related to the octonions; the question is whether there are algebras responsible for the other type of triality, this new type. The answer is yes, and these are the Okubo algebras. They are named after Okubo, who was a mathematical physicist with a very nice mathematical mind; he passed away a few years ago, and he considered algebras not just over the real or complex numbers but over general fields, and found some very nice things. So let me show you how these Okubo algebras were defined initially by Susumu Okubo in 1978.
He was studying certain Lie-admissible algebras: vector spaces with a multiplication such that the associated commutator bracket gives a Lie algebra. Among these he considered the Lie algebras of type A, so trace-zero matrices of a given size, and in particular, for three-by-three trace-zero matrices (the Lie algebra sl3), he considered a new multiplication. He multiplied x times y and y times x and took a suitable linear combination of the two products; of course the trace of this linear combination may fail to be zero, so one adjusts by a multiple of the identity to get a trace-zero matrix again. So the last term is just an adjustment so that, starting from two trace-zero matrices, one gets another trace-zero matrix. He considered this product, together with the norm given by the trace of the square scaled by minus one half, and it turns out that this is a composition algebra: the norm defined in this way is a non-degenerate quadratic form and it is multiplicative, so the norm of the star product we have just defined is the product of the norms. This was created inside three-by-three matrices; octonions are not associative, and here is a kind of composition algebra defined in a very associative setting. Not only is the norm multiplicative, but there is an extra property that will be quite important later on, so keep in mind the property on the right: it is a weak associativity. This is the so-called flexible law, plus more: for x times y times x it does not matter whether you multiply x times y first and then by x, or the other way around, and both results are equal to the norm of x times y. So the expression is quadratic in x and linear in y, and it is quadratic in x in a very specific way: the norm appears.

Now, this can in principle be defined over any field of characteristic different from three, because one needs a primitive cube root of unity to define this precise linear combination, and also of characteristic different from two, because of the one half in the norm. But if you expand the trace of x squared for a three-by-three trace-zero matrix x, a factor of two appears, it is twice something, so you can cancel the two, and the norm makes sense also in characteristic two. So this makes sense over any field of characteristic different from three. But the problem encountered when dealing with triality was precisely in characteristic three. Okubo and Osborn, a few years later, gave a definition of the pseudo-octonion algebra in characteristic three by simply writing down the multiplication table. I want to show you a nicer way to deal with this. Let us start again with three-by-three matrices, say over the complex numbers; very easy, and no doubt we have primitive cube roots of unity there. Just take these two Pauli-type matrices: both have cube equal to one, and they almost commute, that is, they commute up to a primitive cube root of unity. Then define elements x_{i,j}, for any pair (i,j) of indices modulo three except (0,0), in this way; this gives a basis of sl3(C), the trace-zero matrices. And if you multiply these elements, omega disappears from the products: the product of two of these basis elements is either zero or another basis element, and the coefficients that appear are just one or minus one.
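The two three-by-three matrices being used are presumably the standard "clock and shift" matrices; here is a sketch of them and of the commutation rule that makes the construction work, with omega a primitive cube root of unity:
\[
x=\begin{pmatrix}1&0&0\\0&\omega&0\\0&0&\omega^{2}\end{pmatrix},\qquad
y=\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix},\qquad
x^{3}=y^{3}=1,\qquad yx=\omega\,xy,
\]
and the basis of $\mathfrak{sl}_3$ alluded to consists of suitable scalar multiples of the monomials $x^{i}y^{j}$ with $(i,j)\neq(0,0)$ modulo three.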
Whether you get plus or minus depends on a determinant; it is an easy exercise. So the omegas disappear: we have a basis whose multiplication table does not depend on omega, in which the structure constants are integers, in fact just one, minus one or zero. As for the norm, all these basis elements are isotropic, and for the corresponding bilinear form two basis elements are either orthogonal or pair to one. So everything works very nicely, and therefore we can do something similar to what is done to define the classical simple Lie algebras in prime characteristic: there one starts with the complex Lie algebra, takes a Chevalley basis so that the structure constants are integers, obtains an algebra defined over the integers, and can then define the algebra over an arbitrary field. That is what we can do here: we take the span of these basis elements over the integers, a kind of Chevalley basis with integer structure constants, so this is well defined, and now we can tensor with any field and get the algebra. This is a way of defining the split Okubo algebra over an arbitrary field, even in characteristic three. Nowadays the forms of this split Okubo algebra are called Okubo algebras in general.

OK, so: symmetric composition algebras. The Okubo algebras are examples of this kind of algebra. These are composition algebras, so we have a multiplication and a non-singular multiplicative quadratic form; we do not impose the existence of a unit element, but we impose an extra condition, namely the associativity of the norm that you can see here, which is equivalent to the weak associativity of the product that I asked you to keep in mind. These are called the symmetric composition algebras: we forget about the unit, but we impose this extra associativity condition on the norm, which does not hold for the octonions. It was Markus Rost in the nineties who realized that this is the right class of algebras for dealing with the two kinds of triality that Tits had considered. Examples: the Okubo algebras satisfy all these conditions, so they are symmetric composition algebras. The other natural examples come essentially from the octonions and quaternions: given any Hurwitz algebra, there is a standard conjugation, and we can define a new product on the algebra, where the new product of x and y is the old product, not of x and y, but of their conjugates. This gives a new product; the old unit is no longer a unit, so these algebras are not unital, and they are called the para-Hurwitz algebras. They are also symmetric composition algebras. And it turns out that, essentially, that is all: any eight-dimensional symmetric composition algebra is either a para-Hurwitz algebra or an Okubo algebra, so all the examples are here. In dimension four, every symmetric composition algebra is a para-quaternion algebra, and in dimension two one has to be a bit careful, especially in characteristic three, but things are not very difficult there.
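In formulas, the two equivalent conditions and the para-Hurwitz construction just described read as follows (a recap in my notation, with $n(\cdot,\cdot)$ the polar form of the norm and $\bar{x}$ the standard conjugation):
\[
(x*y)*x \;=\; x*(y*x) \;=\; n(x)\,y
\qquad\Longleftrightarrow\qquad
n(x*y,\,z) \;=\; n(x,\,y*z)\quad\text{for all }x,y,z,
\]
\[
\text{para-Hurwitz product:}\qquad x\bullet y := \bar{x}\,\bar{y}.
\]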
But the interesting case for triality is the eight-dimensional one, and there the symmetric composition algebras are para-Hurwitz or Okubo. [Question: in which characteristic?] Arbitrary characteristic. The proof goes roughly as follows. First we look for an idempotent of the algebra: either the algebra already contains one, or a field extension of degree at most three produces one. Because of the identity that I stressed before, an idempotent has norm one, and then we change the product: we define a new product on the algebra so that the idempotent becomes a unit. Indeed, if you substitute the idempotent e for x you get e star y star e, which by that identity equals the norm of e times y, so you get y, and the same happens if you substitute e for y. So the new product is unital, and starting with a symmetric composition algebra containing an idempotent you can pass to a unital composition algebra; in dimension eight these are the octonion algebras, so we know what they are. Moreover, the idempotent allows us to define an automorphism of order one or three: if this automorphism is the identity we have a para-Hurwitz algebra, and otherwise we can study the eigenspace decomposition for this automorphism and distinguish between para-Hurwitz and Okubo algebras. There are many details hidden here, but this is essentially the idea of the proof. So the symmetric composition algebras essentially come in two flavours: either they are para-Hurwitz, so essentially the octonions, quaternions and so on with a slightly modified product, or they belong to this new class of Okubo algebras.

And now, what Study did for the split Cayley algebra we can do starting from any symmetric composition algebra with isotropic norm, and it is exactly what we need: consider solids of two types, fixing an isotropic element a and taking the elements whose product with a is zero, multiplying by a on the left or on the right; these are the two kinds of solids, and we get the corresponding geometric triality between points and the two kinds of solids. And here there is no need to insert a conjugate anywhere, as we had to do for the octonions; this is a very clean way of exhibiting triality. The two kinds of triality that Tits realized exist appear according to whether one starts with the split para-octonion algebra or with the split Okubo algebra.

So what do these algebras look like; what is the classification of Okubo algebras? We have defined the split Okubo algebra, and its forms, the algebras which become the split Okubo algebra after extending scalars to the algebraic closure, are the Okubo algebras. But what do they look like? This is quite natural: if we start with a field of characteristic different from three containing a primitive cube root of unity, then it turns out that the affine group scheme of automorphisms of the algebra of three-by-three matrices and the automorphism group scheme of the split Okubo algebra, which lives on the trace-zero matrices, are isomorphic. So the classification of forms of the three-by-three matrix algebra, a very nice simple associative algebra, and of forms of the split Okubo algebra run in parallel.
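A sketch of the "new product" step in the proof of this dichotomy, as I understand the standard trick (the formula is my reconstruction, not copied from the slides): given a symmetric composition algebra $(S,*,n)$ and an idempotent $e$ (so $e*e=e$, hence $n(e)=1$), set
\[
x\diamond y := (e*x)*(y*e).
\]
Then, using $(a*b)*a=a*(b*a)=n(a)\,b$,
\[
e\diamond y=e*(y*e)=n(e)\,y=y,\qquad x\diamond e=(e*x)*e=n(e)\,x=x,
\]
so $(S,\diamond,n)$ is a unital composition algebra, that is, a Hurwitz algebra with unit $e$.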
And we get the natural result, although it was not originally obtained this way, that the isomorphism classes of Okubo algebras correspond to isomorphism classes of central simple associative algebras of degree three: you start with a central simple degree-three associative algebra, which is either the algebra of three-by-three matrices or a division algebra of degree three, you perform the same construction given by Okubo, with omega, omega squared and so on, and you get an Okubo algebra; all of them are obtained in this way. If the field does not contain a primitive cube root of unity, then we extend scalars by adding one (all this in characteristic different from three), and the Okubo algebras correspond to degree-three simple associative algebras with an involution of the second kind over this degree-two field extension. Essentially Okubo's construction works again; the only difference is that one takes, not all the trace-zero elements of the algebra B, but the skew-symmetric elements with respect to the involution of the second kind. In this way one gets all Okubo algebras and their isomorphism classes.

In characteristic three none of this works. The classification in characteristic three was given in 1997, but it is only recently that we have really understood what is going on. What happens for the split Okubo algebra over a field of characteristic three is that the automorphism group scheme is not smooth. The smooth part has dimension eight, and this is the group that Tits had realized exists: an eight-dimensional group with a large unipotent radical. But as a group scheme it is not smooth: its dimension is eight, while its Lie algebra has dimension ten. The Lie algebra of the automorphism group scheme is the Lie algebra of derivations, and this Lie algebra of derivations is simple of dimension ten; it is one of the non-classical simple Lie algebras over fields of prime characteristic, in this case a Cartan-type algebra of type H with some deformation. It turns out that the automorphism group scheme is the product (not a semidirect product, just the product) of two subgroups: the smooth part of dimension eight, and a diagonal part which is the Cartesian product of two copies of the scheme of cube roots of unity. There are no non-trivial rational points of this diagonal part over a field of characteristic three, but as a scheme it is there, and it corresponds to a grading of the algebra by two copies of the integers modulo three, an important grading of this algebra.

Then, at the level of the first cohomology set (and since these group schemes are not smooth, these are cohomology sets in the fppf topology), the embedding of this diagonal subgroup D into the automorphism group of A induces a map which, even though D is only a subgroup, is surjective at the level of cohomology. This means that the forms of the Okubo algebra, which are classified by the corresponding cohomology set on the right, are all given in terms of a couple of parameters: mu3 cross mu3 corresponds to pairs of parameters in F modulo cubes. And this is exactly what happens. So the classification in characteristic three is completely different.
A way to obtain all Okubo algebras in characteristic three consists of taking the split Okubo algebra considered before, tensoring with the algebraic closure of the field, choosing two non-zero scalars alpha and beta, taking the basis elements x_{i,j} from before and making certain substitutions. In this way one gets Okubo algebras depending on two parameters, and one gets all possibilities. This is very technical, so let me not go into it; the message is that characteristic three is completely different. The result, obtained in 1997, says that the Okubo algebras in characteristic three are classified by two parameters, and the algebras obtained from different pairs of parameters are isomorphic if and only if a certain condition on the scalars alpha and beta is fulfilled. And that is it.

OK, and in just a couple of minutes, if you allow me, a little bit of algebraic triality, to relate all of this. I told you that these symmetric composition algebras are the right class of algebras for triality, and the way to proceed is the following. We start with a symmetric composition algebra and consider the operators of left and right multiplication by its elements. Remember the weak associativity condition I stressed several times: x times y times x, in either order, equals the norm of x times y. This translates into the following property: the left multiplication by x composed with the right multiplication by x, in either order, is a scalar multiple of the identity, the scalar being the norm of x. This means that the square of the two-by-two matrix of endomorphisms on the right is a scalar multiple of the identity, given by the norm. So we have an associative algebra, namely the endomorphisms of the direct sum of two copies of the symmetric composition algebra, viewed as two-by-two matrices of endomorphisms, and elements of it whose square is given by a quadratic form. This immediately gives a map from the corresponding Clifford algebra of the norm into the algebra we are considering, and this map is an isomorphism of the Clifford algebra of the symmetric composition algebra onto the algebra of endomorphisms of two copies of the symmetric composition algebra. Moreover, the canonical involution of the Clifford algebra corresponds to the natural involution on the right-hand side, the one attached to the norm.

So we have this isomorphism, and the spin group of the norm lives inside the Clifford algebra: it consists of the even invertible elements which preserve, under conjugation, the vector space, which in this case is the symmetric composition algebra, and which have spinor norm one. Any such element lies in the Clifford algebra, so we can move it to a two-by-two matrix of endomorphisms; but it is even, and the even part consists of the diagonal matrices (I know this is too quick). What happens is that in this way, to any element of the spin group, one associates a couple of endomorphisms of the symmetric composition algebra, together with an extra orthogonal transformation, the conjugation sending x to u x u inverse. So we have three different representations of the spin group.
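The outcome of this discussion can be packaged in the "related triples" description of the spin group; as I understand the standard statement (made precise in the next part of the talk), it reads
\[
\mathrm{Spin}(C,n)\;\cong\;\bigl\{(f_0,f_1,f_2)\in \mathrm{O}(C,n)^3 \;:\; f_0(x*y)=f_1(x)*f_2(y)\ \text{ for all }x,y\in C\bigr\},
\]
the three projections giving the natural and the two half-spin representations, and the cyclic shift $(f_0,f_1,f_2)\mapsto(f_2,f_0,f_1)$ giving an outer automorphism of order three.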
One of them is the natural representation on the vector space C, which factors through the orthogonal group, and the other two are the half-spin representations of the spin group, and the three are related by means of the multiplication of the symmetric composition algebra. This is the way to deal with triality in terms of symmetric composition algebras: the spin group can be described in terms of triples of orthogonal transformations that are nicely related through the multiplication of the symmetric composition algebra. And once we have the property that f0 of x times y equals f1 of x times f2 of y, we can permute f0, f1 and f2 cyclically and obtain another triple satisfying the same property, and these cyclic permutations give the outer automorphisms of the spin group. So the symmetric group acting by outer automorphisms on the spin group is just the permutation of these three components: the elements of the spin group are expressed as triples of related orthogonal transformations, and permuting the components gives the symmetric group of degree three.

Since we have to be strict with time I will finish here, but there is also the local principle of triality: instead of the spin group or the adjoint orthogonal group one can consider the Lie algebra. Starting from the special orthogonal Lie algebra, there are again related triples, now of endomorphisms of the symmetric composition algebra, and the cyclic permutation of the components gives the outer automorphism of order three of the triality Lie algebra; at the Lie algebra level, the triality Lie algebra is isomorphic to the special orthogonal Lie algebra. The three projections of the set of triples onto its components give three representations of the special orthogonal Lie algebra: the natural representation and the two half-spin representations. So all of these are related again, and one interesting thing is that one can use this to give a very symmetric construction of Freudenthal's magic square; in characteristic three one has to be a bit careful with the second row and second column. The construction is completely symmetric and depends on two symmetric composition algebras. Even more, in characteristic three there are composition superalgebras, and only in characteristic three can one define nice non-trivial composition superalgebras, so one can extend the construction, and extending it we found a whole new family of finite-dimensional simple Lie superalgebras in characteristic three. This is very nice, and Sofia is smiling because he knows these algebras very well. So I will stop here with this sentence of Okubo, about octonion algebras and other such algebras maybe being quite important in the future. Thank you very much.
Duality in projective geometry is a well-known phenomenon in any dimension. On the other hand, geometric triality deals with points and spaces of two different kinds in a seven-dimensional projective space. It goes back to Study (1913) and Cartan (1925), and it was soon realized that this phenomenon is tightly related to the algebra of octonions, and the order 3 outer automorphisms of the spin group in dimension 8. Tits observed, in 1959, the existence of two different types of geometric triality. One of them is related to the octonions, but the other one is better explained in terms of a class of nonunital composition algebras discovered by the physicist Okubo (1978) inside 3x3-matrices, and which has led to the definition of the so-called symmetric composition algebras. This talk will review the history, classification, and their connections with the phenomenon of triality, of the symmetric composition algebras.
10.5446/53530 (DOI)
So first of all I want to thank the organizers very much for the invitation and the opportunity to give this talk. I will talk about closed G2 structures, so let me quickly recall what I mean by a G2 structure; many people probably know this already, but let us introduce it again. We work on a seven-dimensional manifold, and by a G2 structure I mean a 3-form phi on the 7-manifold which is stable, meaning that its orbit under the pointwise action of the general linear group is open. That alone is not sufficient to have stabilizer G2, because in general there are two open orbits, so we ask in addition a non-degeneracy condition: for any non-zero vector field X, the 7-form obtained by wedging the contraction of phi by X with itself and with phi is non-zero. This is the non-degeneracy condition, and it means that we get a non-vanishing 7-form; up to a factor, one then obtains a positive-definite metric, by wedging the contraction of phi by X with the contraction of phi by Y and with phi, together with a volume form. So phi determines at the same time a metric and a volume form. The stability of 3-forms in dimension seven is similar to the stability of 2-forms on an even-dimensional vector space, which is part of what one has with a symplectic form, and we will see some differences between closed G2 structures and symplectic forms.

Why is G2 interesting? Because it is one of the possible holonomy groups, in Berger's list, of a Riemannian metric on an irreducible non-symmetric manifold, and having holonomy contained in G2 is equivalent to asking that our 3-form phi is closed and coclosed, where the star is always the Hodge star of the metric induced by phi. It was quite hard to find compact examples with holonomy equal to G2. In this talk we will forget about the coclosed condition and consider only the condition that the 3-form is closed; that is what I mean by a closed G2 structure. The condition is weaker than the previous one, but we will see that it is nevertheless quite restrictive in other respects, even though, for instance, no topological obstructions to the existence of a closed G2 structure on a compact manifold are known. In the talk I will discuss such closed G2 structures especially on Lie groups, so phi will be a left-invariant closed G2 structure on a Lie group G, which means that we are considering G2 structures on the Lie algebra g of G. We will also consider conditions one can impose on the metric induced by phi, like the Einstein condition, or more general conditions related to the Ricci tensor, such as the extremally Ricci-pinched condition introduced by Robert Bryant, and we will see a little of how closed G2 structures behave when you evolve them, because Bryant also introduced a geometric flow, the Laplacian flow, that allows you to evolve a closed G2 structure, and again Lie groups play a special role there.
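In formulas, the non-degeneracy condition and the induced metric just described read roughly as follows (the factor 1/6 is the normalisation I remember from the literature, so treat it as an assumption):
\[
(\iota_X\varphi)\wedge(\iota_X\varphi)\wedge\varphi \neq 0 \quad\text{for all }X\neq 0,
\qquad
g_\varphi(X,Y)\,\mathrm{vol}_\varphi \;=\; \tfrac{1}{6}\,(\iota_X\varphi)\wedge(\iota_Y\varphi)\wedge\varphi .
\]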
So, the plan of the talk is the following: I will review a little of what is known about closed G2 structures in general, then we will see what happens on Lie algebras, where many things are different, and in the last part I will talk about the Laplacian flow and the extremally Ricci-pinched condition.

I already mentioned what I mean by a G2 structure: we have a 3-form phi whose stabilizer at any point is isomorphic to G2, and, as I explained, you then automatically get a metric and a volume form; that is the nice thing about G2 structures. It was shown by Fernández and Gray that the condition for the 3-form to be parallel is equivalent to the condition that phi is closed and coclosed, which in turn is equivalent to the holonomy of the induced metric being contained in G2. A nice fact is that for a parallel, or torsion-free, G2 structure, so phi closed and coclosed, the metric is automatically Ricci-flat; that was proved by Bonan.

A G2 structure with phi closed is called closed, or also calibrated, and when I talk about closed G2 structures I will always mean that phi is closed but not coclosed. In that case the differential of star phi, a 5-form, is tau wedge phi for a unique 2-form tau, and all the torsion is encoded in this tau. In general, if you decompose the space of 2-forms into irreducible components with respect to G2, you get two components: a seven-dimensional one, isomorphic to R^7, and a fourteen-dimensional one, isomorphic to the Lie algebra g2. For a closed G2 structure the torsion form tau lies in the component isomorphic to g2. Algebraically this means that tau wedge phi equals minus star tau, and one also has tau wedge star phi equal to zero. An interesting consequence, since tau is essentially the derivative of star phi, is that tau is coclosed; but the really interesting point is that the Hodge Laplacian of phi, with respect to the metric induced by phi, is exact: it is exactly the differential of tau. That is the point which allowed Robert Bryant to define a geometric flow evolving closed G2 structures, because you then evolve within the cohomology class of the starting closed G2 structure.

The name calibrated comes from Harvey and Lawson, because phi defines a calibration: if you restrict phi to any oriented tangent 3-plane, you get at most the volume. One can then talk about calibrated submanifolds and so on, but I will not do that in this talk.

The Ricci tensor: since phi determines the metric, one expects the Ricci tensor to be determined by the torsion, and there is a general formula, found by Bryant, for general G2 structures. In the closed case the Ricci tensor is completely determined by tau: there is one part which is a multiple of the metric, given just by the norm of tau squared, and a second part obtained from the 3-form d tau minus one half star of tau wedge tau.
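For reference, here are the torsion identities described above written out, with tau the torsion 2-form and the star the Hodge star of the induced metric:
\[
d\varphi=0,\qquad d{\star}\varphi=\tau\wedge\varphi,\qquad \tau\in\Omega^2_{14},\qquad
\tau\wedge\varphi=-{\star}\tau,\qquad \tau\wedge{\star}\varphi=0,\qquad \Delta_\varphi\varphi = d\tau .
\]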
This second part of the Ricci formula uses a very natural G2-equivariant map from 3-forms to symmetric 2-tensors, again defined by contraction: as before, you take the two contractions of phi, wedge with the given 3-form and take the Hodge star, and the result is symmetric. And here is the first problem in this setting: you do not get any restriction, in the compact case, on the existence of such structures, because, as you can see from this expression, the scalar curvature is non-positive; so the usual arguments give no obstruction to the existence of closed G2 structures on compact manifolds.

Let us now see what happens if we impose a condition on the metric induced by phi. The first natural condition is that the metric is Einstein. In the compact case it was proved, in two different ways, by Cleyton and Ivanov and then by Robert Bryant, that if the metric is Einstein then automatically tau is zero and phi is parallel. But there is a nice inequality, shown by Bryant, relating the integral over M of the square of the scalar curvature to the integral of the squared norm of the Ricci tensor, and one can ask whether equality is possible. We will see that it is, and the condition for equality is that the differential of tau is equal to a specific expression: a multiple of phi plus a multiple of star of tau wedge tau. This is what is called extremally Ricci-pinched, and we will see that this condition behaves nicely when one evolves a closed G2 structure by the Laplacian flow. Of course the condition also makes sense in the non-compact case, and in particular at the level of left-invariant structures on Lie groups, so at the level of Lie algebras.

The first thing one always does, at least for me, is to look at manifolds with symmetry. So the first question is: can there exist a compact homogeneous manifold with an invariant closed G2 structure? As I said, there are no known restrictions on the existence of closed G2 structures on compact manifolds, no obstruction at all; but with symmetry one can say something. Consider the automorphism group of a manifold with a closed G2 structure: as usual, the group of diffeomorphisms F which preserve phi, so the pullback of phi under F is phi. At the infinitesimal level, its Lie algebra consists of the vector fields X for which the Lie derivative of phi along X vanishes. Podestà and Raffero showed that, in the compact case, if the closed G2 structure is non-parallel, so tau is not zero, and X belongs to this Lie algebra of automorphisms, then the 2-form obtained by contracting phi with X is not only closed but also coclosed. From this you get restrictions: the dimension of the automorphism group is at most the second Betti number of M, and also at most six, and, importantly, the group is abelian. With this one can conclude that there are no compact homogeneous examples.
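Going back to Bryant's inequality and its equality case mentioned a moment ago, here they are in formulas; the constants 3 and 1/6 are quoted from memory from Bryant's paper, so take them as assumptions:
\[
\int_M \operatorname{scal}(g_\varphi)^2\,\mathrm{vol}_\varphi \;\le\; 3\int_M |\operatorname{Ric}(g_\varphi)|^2\,\mathrm{vol}_\varphi ,
\]
with equality if and only if
\[
d\tau \;=\; \tfrac{1}{6}\bigl(|\tau|^2\,\varphi + {\star}(\tau\wedge\tau)\bigr),
\]
which is the extremally Ricci-pinched condition.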
Here, by a compact homogeneous example I mean that there is a group acting transitively which is a subgroup of the automorphism group of the manifold with its closed G2 structure; so there are no such examples. They conclude even more in the cohomogeneity-one case, where in fact essentially only the 7-torus can occur, but I will not talk about that.

The first relation with Lie groups comes from examples, because the first example of a compact manifold admitting a closed G2 structure but no parallel G2 structure was found by Marisa Fernández, and it is a nilmanifold: you have a nilpotent Lie group G and a lattice Gamma, a discrete subgroup such that the quotient of G by Gamma is compact. That was the first example of such a manifold. Later Diego Conti and Marisa Fernández classified, up to isomorphism and at the level of the Lie algebra, the nilpotent Lie algebras admitting a closed G2 structure, and they found only 12 classes. There are also examples in the solvable case, such as almost abelian groups, meaning semidirect products of R with R^6; those were found by Marco Freibert. Later, together with Marisa Fernández and other authors, more examples were found, including some of cohomogeneity one, and there is also a construction of closed G2 structures which are not compact but have complete metric and are of cohomogeneity one under a compact simple Lie group; so compactness is much stronger than completeness in the cohomogeneity-one setting. Recently, with Marisa Fernández, Alexei Kovalev and Vicente Muñoz, we constructed an example which is different from the previous ones: it is not locally homogeneous, and it is obtained by adapting Joyce's construction, that is, resolving the singularities as Joyce does, but instead of starting from the flat 7-torus we start with a nilmanifold, a compact quotient of a nilpotent Lie group by a lattice, take the quotient by a finite group and then resolve. The nice thing is that all the previous examples, in the nilpotent and solvable cases, are not formal, while this one is formal. We are still working on this; I think we can perhaps get an example with first Betti number b1 equal to zero, which would be nice because that case is completely open. So this example is, in some sense, new.

OK, so we saw before that compact plus Einstein is not possible. Then one can ask what happens in the non-compact homogeneous case; this is something I studied with Marisa Fernández and Víctor Manero. The first thing to say is that it is still open to find a non-compact example of a closed G2 structure whose induced metric is Einstein; this is completely open, and it is not even clear whether a non-complete one exists. In any case, we worked with non-compact homogeneous Einstein manifolds, and fortunately, in dimension seven, the Alekseevskii conjecture, namely that solvmanifolds exhaust the class of non-compact homogeneous Einstein manifolds, is known: it was shown by Arroyo and Lafuente. (As far as I remember the conjecture is still open in some exceptional cases and in dimensions higher than about eleven, but in dimension seven it is exactly what we need.) Here by a solvmanifold, which in this case is not compact, this being just the terminology used, I mean a simply connected solvable Lie group with a left-invariant metric. What we showed is that, for such a solvmanifold carrying a left-invariant closed G2 structure, if the
What we showed is that if such an Einstein solvmanifold admits a left-invariant closed G2 structure inducing the Einstein metric, then the metric has to be flat, so you cannot have that. In fact we showed more: this remains true even if, instead of asking φ to be closed, you only ask φ to be coclosed, which is quite striking because coclosed is a much weaker condition than closed. The idea of the proof uses some very strong structure theory. One strong result, proved by Lauret, is that Einstein solvmanifolds are standard: the Lie algebra of the solvable group splits as the nilradical n plus its orthogonal complement a, which is abelian. Another thing one can use is that one can always reduce to rank one — the rank being the dimension of this abelian complement a — and everything is in fact determined by n, which also coincides with the commutator. This is important for us because, essentially by a result of Heber — proved before Lauret's theorem, for standard Einstein solvmanifolds — the standard Einstein metric is unique up to isometry and scaling. Then, to prove the result, we reduce the dimension from seven to six, using the trick that whenever you take a unit vector A in the orthogonal complement of n, you get an induced SU(3)-structure on the orthogonal complement of A, and using that you can arrive at the result. So it is a sort of reduction to dimension six.

OK, so as I said before, symplectic structures and closed G2 structures have some similarities — with the caveat that a symplectic structure does not induce a metric, you need to impose an extra one — but one can see that they are completely different at the level of Lie algebras, especially in the unimodular case. It was shown by Chu and by Lichnerowicz and Medina that if you have a unimodular Lie algebra — meaning the trace of ad_X is zero for every X — carrying a symplectic structure, then it has to be solvable. And this is the first difference: with Raffero we showed that it is in fact possible to have non-solvable unimodular Lie algebras with a closed G2 structure. The curious thing you can see here: it is not surprising that there is a splitting into a three-dimensional plus a four-dimensional part — the four-dimensional part being the radical, the maximal solvable ideal, and the three-dimensional part being semisimple — but the strange thing, which we did not explain geometrically, is that the semisimple part can only be sl(2,ℝ). We also classified, up to isomorphism, how many Lie algebras we get: essentially four, and only one of them is really not a direct sum of two pieces.

Now I show another difference, and this is why I am interested in exact closed G2 structures: the question is completely open in the compact case. When you have a symplectic 2-form on a compact manifold, it gives you a non-zero de Rham class. But when you take a compact 7-manifold with a closed G2 structure, you do not know whether the class determined by φ is exact or not — nothing is known about that. For instance, one does not even know whether the seven-sphere admits a closed G2 structure. So the motivation for studying exactness is to look inside this class of solvmanifolds Γ\G: what happens if you have a φ, a 3-form, which is exact? That is quite a natural question.
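To see why exactness behaves so differently in the two settings, here is the standard Stokes-type computation — not specific to this talk — showing that a symplectic form on a compact manifold can never be exact, together with the reason the same argument says nothing for closed G2-structures:
\[
\omega = d\alpha \;\Longrightarrow\; \int_M \omega^{\,n} = \int_M d\bigl(\alpha\wedge\omega^{\,n-1}\bigr) = 0,
\]
contradicting the fact that ω^n is a volume form on a compact 2n-manifold. For a closed G2-structure the volume form is φ ∧ ⋆_φ φ, but ⋆_φ φ is in general not closed (d⋆_φ φ = τ ∧ φ), so if φ = dα one only gets
\[
\int_M \varphi\wedge\star_\varphi\varphi \;=\; -\int_M \alpha\wedge\tau\wedge\varphi,
\]
which need not vanish; no analogous obstruction to exactness is known, which is why the question is open.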
In the symplectic case, the analogous question was answered by Diatta and Manga: a unimodular Lie group cannot admit any left-invariant exact symplectic form. So you can ask the same question for closed G2 structures. We were happy because we found an example — but not so fast. We found a unimodular Lie algebra satisfying this condition — the second and third Chevalley–Eilenberg Betti numbers of g are both zero — and admitting an exact G2 structure. Then you would say: now we have an example. No, we do not, because this example is solvable and, unfortunately, besides the unimodularity obstruction there is another obstruction for a solvable Lie group to admit a lattice. It was found by Garland, in a paper from around the sixties, before Milnor's paper, and now I explain what strongly unimodular means. He showed that a simply connected solvable Lie group which is not strongly unimodular cannot admit any lattice — so strong unimodularity is a necessary condition for the existence of a lattice.

What does strongly unimodular mean? It is a condition only for solvable Lie groups, and it goes like this — I did not know it before this example, in fact. Let g be our solvable Lie algebra and let n be its nilradical, which is the important object here, and consider its descending central series: n⁰ = n, and then nⁱ = [n, nⁱ⁻¹], and so on. The condition is that for any X in g, if you take ad_X and restrict it to the quotient nⁱ/nⁱ⁺¹, the trace of this restriction should be zero for every i. So this is an extra condition that you have, in the solvable case, for the existence of lattices — and our example is not strongly unimodular. Then we asked: in general, if we require b₂ = b₃ = 0, is it possible to have a unimodular, or even strongly unimodular, Lie algebra with this condition? The answer seems to be no, but this is still work in progress. We also have another example with b₂ = b₃ = 0, but again it is unimodular and not strongly unimodular, so probably one can prove this in general — we do not know yet, so let us say the question is open.

OK, let's go on — oh, how many minutes do I have? Ten minutes; I lost track of time. OK then: the Laplacian flow. We will come back to Lie groups and Lie algebras — we will see their role — but this part is general. The idea of the Laplacian flow, as I said, is due to Bryant, and it was to use a geometric flow — we will see in a moment why it really is a geometric flow: it is in fact the gradient flow of the volume functional — to deform closed G2 structures, with the hope of obtaining, if the flow converges, a parallel G2 structure. For the moment everybody working on this is far from that goal: there are some results if you assume, for instance, that your closed G2 structure is sufficiently close to a parallel one — then you do get convergence — but there are no general results in that direction. The flow is defined as follows: you evolve the 3-form φ_t by setting the time derivative of φ_t equal to the Hodge Laplacian of φ_t. The sign is the correct one, because fortunately this flow is weakly parabolic — so one can show short-time existence — but only when you look at the flow in the direction of closed G2 structures; if you do not restrict to closed G2 structures, this is no longer true.
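As a hedged formula-level summary of the flow just introduced (following the sign convention of the talk; the parabolicity statement holds only in the direction of closed structures and after gauge-fixing):
\[
\frac{\partial}{\partial t}\varphi_t = \Delta_{\varphi_t}\varphi_t, \qquad \Delta_{\varphi} = d\,d^*_{\varphi} + d^*_{\varphi}\,d,
\]
starting from a closed G2-structure φ_0. For closed structures one has Δ_φ φ = dτ, so the right-hand side is exact, the flow preserves closedness, and on a compact manifold a unique solution exists for a short time.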
Since the Laplacian of φ_t is exact — it is just the differential of the torsion 2-form τ_t induced by φ_t — the solution φ_t stays in the de Rham cohomology class of φ₀. And if you look at how the induced metric evolves, you see that, if you forget about one extra term, it is just like the Ricci flow — the usual Ricci flow for the metric — but there is an extra lower-order term depending on τ, on the torsion, and this makes life very different from the Ricci flow case.

Let me just mention a few results. As I said, the stationary points of this flow are exactly the parallel, torsion-free, G2 structures, and it really is a geometric flow because it is the gradient flow of what is called the Hitchin volume functional. That is quite natural: you take φ₀, the starting point of your flow, you take φ in the cohomology class of φ₀, and you take the natural volume associated to φ, which is φ ∧ ⋆φ — that gives you a volume form on M — and you integrate it over M. What is surprising is that this functional is monotonically increasing along the Laplacian flow, and in fact the critical points are, I think, maxima. Then — and this is not easy at all — Bryant and Xu showed that you have short-time existence. At the level of Lie groups, in the invariant case, the PDE system becomes an ODE system, so you do not have such problems with short-time existence, and in fact the solutions known to exist for all time are on Lie groups — there are many, many results about that — and also on the cohomogeneity one seven-manifolds with closed G2 structures that I mentioned before.

As for the Ricci flow, where the natural self-similar solutions are the Ricci solitons, one can also look for self-similar solutions in this case. What does that mean? You have your initial φ, then a family of diffeomorphisms F_t and functions ρ(t), and you take the pullback scaled by ρ(t). Asking this to be a solution means exactly — and this is very similar to the Ricci flow case — the following condition: the Laplacian of φ equals a multiple of φ plus the Lie derivative of φ with respect to some vector field X, and everything is governed by the constant λ. But the situation is completely different from the Ricci flow. Why? Of course one can use the same terminology — shrinking when λ is negative, steady when λ is zero, expanding when λ is positive — but on a compact manifold any Laplacian soliton whose G2 structure is not torsion-free has to be expanding: λ has to be positive, and that is different from zero. So the picture is completely different from the Ricci flow one. Another difference is that for the Ricci flow, in the compact case, all Ricci solitons are gradient; in this case it is still not known whether one can have a gradient soliton — that is open.

So, in fact, whether a non-trivial expanding Laplacian soliton exists on a compact manifold is still an open problem. But there are solutions, as one can see in the papers by Lauret and Nicolini — also Nicolini, sorry — and also in my paper in the solvable case, and there are examples of all types: steady, shrinking and expanding. The steady condition is related, in fact, to the extremally Ricci-pinched condition that I introduced at the beginning. So let us see what happens — this is what I did with Raffero: the natural thing is to start with an extremally Ricci-pinched closed G2 structure and evolve it by the Laplacian flow.
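A hedged summary of the variational and soliton structure just described (the normalisation of the volume functional and of the gradient-flow constant varies between references):
\[
[\varphi_t] = [\varphi_0] \in H^3_{dR}(M), \qquad
\mathcal{V}(\varphi) = \int_M \varphi \wedge \star_\varphi \varphi,
\]
and, up to a constant, the flow is the gradient flow of \mathcal{V} restricted to the cohomology class of φ_0, with \mathcal{V} non-decreasing along the flow and the torsion-free structures as critical points. Self-similar solutions φ_t = ρ(t) F_t^*φ correspond to Laplacian solitons,
\[
\Delta_\varphi\varphi = \lambda\,\varphi + \mathcal{L}_X\varphi,
\]
called shrinking (λ < 0), steady (λ = 0) or expanding (λ > 0); on a compact manifold a soliton with φ not torsion-free is necessarily expanding.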
What happens? The extremally Ricci-pinched condition is preserved. This is the theorem we showed, and it is general for compact manifolds: if you take a compact manifold with a closed G2 structure which is extremally Ricci-pinched, then the solution φ_t of the Laplacian flow exists for every t. In fact — we already know that φ_t − φ₀ is an exact form, this follows from what I said before — here the difference of the two is just a function of t, whose expression depends on the norm of τ, times an exact 3-form determined by the torsion form τ of the starting structure. The dramatic thing in the extremally Ricci-pinched case is that the norm of τ is constant — that is a geometric property here. We also showed that in this case the Laplacian flow has constant velocity and the Ricci curvature stays the same. So it seems that extremally Ricci-pinched structures play, in some sense, a role for the Laplacian flow similar to the one Ricci solitons play for the Ricci flow; in any case the behaviour of the solutions is completely different.

Let me finish the talk by mentioning examples of this condition. The first example was given by Bryant on a non-compact homogeneous space, a quotient of the semidirect product SL(2,ℂ) ⋉ ℂ² by SU(2); one can then take a compact quotient by a discrete group Γ, but this Γ is not a subgroup of that group — in fact M can be identified with a solvable Lie group, and Γ is not a subgroup of it but of its automorphism group — so the compact example is only locally homogeneous, in some sense. Later Lauret found a unimodular example. (The notation here is one nobody else uses, but anyway: instead of giving the brackets I give the differentials, so I write that e¹, e², e³ are closed, that de⁴ is expressed in terms of wedge products of the eⁱ, and so on — I just work with the differentials of the one-forms.) This example is extremally Ricci-pinched and it is also a steady Laplacian soliton. With Raffero we proved that this is the only unimodular Lie group admitting this type of structure, so there is no hope of finding other examples in the unimodular case. Moreover, Lauret and Nicolini showed that this is always the case: a left-invariant extremally Ricci-pinched closed G2 structure on a simply connected Lie group is always a steady Laplacian soliton.

So a natural question is whether the converse holds, because so far we have that extremally Ricci-pinched implies steady. With Raffero we found an example which is a steady Laplacian soliton but is not extremally Ricci-pinched. This example is very curious because, in the Ricci soliton case, the vector field X appearing in the analogous condition is never left-invariant, while here, if you look at the Lie derivative of φ, the vector field X is left-invariant — which makes it different, and I do not know what that means geometrically. And I think the time is over, yes. OK, so I finish here — thank you very much for your attention.
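Finally, as a hedged recap of the relations between the extremally Ricci-pinched (ERP) condition and Laplacian solitons discussed at the end of the talk (precise statements and hypotheses are in the papers of Bryant, Lauret–Nicolini and Fino–Raffero):
\[
\text{ERP:}\quad d\tau = \tfrac{1}{6}\,|\tau|^2\,\varphi + \tfrac{1}{6}\star_\varphi(\tau\wedge\tau).
\]
On a compact manifold the ERP condition is preserved by the Laplacian flow, |τ| is constant along it, and the solution exists for all times, with φ_t − φ_0 an explicit exact form built from the initial torsion. On a simply connected Lie group, every left-invariant ERP closed G2-structure is a steady Laplacian soliton (λ = 0), while the converse fails: there exist left-invariant steady Laplacian solitons that are not ERP.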
I will review known examples of compact 7-manifolds admitting a closed G2-structure. Moreover, I will discuss some results on the behaviour of the Laplacian G2-flow starting from a closed G2-structure whose induced metric satisfies suitable extra conditions.